Dec  1 03:12:42 np0005540697 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Dec  1 03:12:42 np0005540697 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  1 03:12:42 np0005540697 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 03:12:42 np0005540697 kernel: BIOS-provided physical RAM map:
Dec  1 03:12:42 np0005540697 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  1 03:12:42 np0005540697 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  1 03:12:42 np0005540697 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  1 03:12:42 np0005540697 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  1 03:12:42 np0005540697 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  1 03:12:42 np0005540697 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  1 03:12:42 np0005540697 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  1 03:12:42 np0005540697 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  1 03:12:42 np0005540697 kernel: NX (Execute Disable) protection: active
Dec  1 03:12:42 np0005540697 kernel: APIC: Static calls initialized
Dec  1 03:12:42 np0005540697 kernel: SMBIOS 2.8 present.
Dec  1 03:12:42 np0005540697 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  1 03:12:42 np0005540697 kernel: Hypervisor detected: KVM
Dec  1 03:12:42 np0005540697 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  1 03:12:42 np0005540697 kernel: kvm-clock: using sched offset of 3607972127 cycles
Dec  1 03:12:42 np0005540697 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  1 03:12:42 np0005540697 kernel: tsc: Detected 2799.998 MHz processor
Dec  1 03:12:42 np0005540697 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  1 03:12:42 np0005540697 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  1 03:12:42 np0005540697 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  1 03:12:42 np0005540697 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  1 03:12:42 np0005540697 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  1 03:12:42 np0005540697 kernel: Using GB pages for direct mapping
Dec  1 03:12:42 np0005540697 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Dec  1 03:12:42 np0005540697 kernel: ACPI: Early table checksum verification disabled
Dec  1 03:12:42 np0005540697 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  1 03:12:42 np0005540697 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 03:12:42 np0005540697 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 03:12:42 np0005540697 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 03:12:42 np0005540697 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  1 03:12:42 np0005540697 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 03:12:42 np0005540697 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 03:12:42 np0005540697 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  1 03:12:42 np0005540697 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  1 03:12:42 np0005540697 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  1 03:12:42 np0005540697 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  1 03:12:42 np0005540697 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  1 03:12:42 np0005540697 kernel: No NUMA configuration found
Dec  1 03:12:42 np0005540697 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  1 03:12:42 np0005540697 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  1 03:12:42 np0005540697 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  1 03:12:42 np0005540697 kernel: Zone ranges:
Dec  1 03:12:42 np0005540697 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  1 03:12:42 np0005540697 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  1 03:12:42 np0005540697 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  1 03:12:42 np0005540697 kernel:  Device   empty
Dec  1 03:12:42 np0005540697 kernel: Movable zone start for each node
Dec  1 03:12:42 np0005540697 kernel: Early memory node ranges
Dec  1 03:12:42 np0005540697 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  1 03:12:42 np0005540697 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  1 03:12:42 np0005540697 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  1 03:12:42 np0005540697 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  1 03:12:42 np0005540697 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  1 03:12:42 np0005540697 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  1 03:12:42 np0005540697 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  1 03:12:42 np0005540697 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  1 03:12:42 np0005540697 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  1 03:12:42 np0005540697 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  1 03:12:42 np0005540697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  1 03:12:42 np0005540697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  1 03:12:42 np0005540697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  1 03:12:42 np0005540697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  1 03:12:42 np0005540697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  1 03:12:42 np0005540697 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  1 03:12:42 np0005540697 kernel: TSC deadline timer available
Dec  1 03:12:42 np0005540697 kernel: CPU topo: Max. logical packages:   8
Dec  1 03:12:42 np0005540697 kernel: CPU topo: Max. logical dies:       8
Dec  1 03:12:42 np0005540697 kernel: CPU topo: Max. dies per package:   1
Dec  1 03:12:42 np0005540697 kernel: CPU topo: Max. threads per core:   1
Dec  1 03:12:42 np0005540697 kernel: CPU topo: Num. cores per package:     1
Dec  1 03:12:42 np0005540697 kernel: CPU topo: Num. threads per package:   1
Dec  1 03:12:42 np0005540697 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  1 03:12:42 np0005540697 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  1 03:12:42 np0005540697 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  1 03:12:42 np0005540697 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  1 03:12:42 np0005540697 kernel: Booting paravirtualized kernel on KVM
Dec  1 03:12:42 np0005540697 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  1 03:12:42 np0005540697 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  1 03:12:42 np0005540697 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  1 03:12:42 np0005540697 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  1 03:12:42 np0005540697 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 03:12:42 np0005540697 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Dec  1 03:12:42 np0005540697 kernel: random: crng init done
Dec  1 03:12:42 np0005540697 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: Fallback order for Node 0: 0 
Dec  1 03:12:42 np0005540697 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  1 03:12:42 np0005540697 kernel: Policy zone: Normal
Dec  1 03:12:42 np0005540697 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  1 03:12:42 np0005540697 kernel: software IO TLB: area num 8.
Dec  1 03:12:42 np0005540697 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  1 03:12:42 np0005540697 kernel: ftrace: allocating 49313 entries in 193 pages
Dec  1 03:12:42 np0005540697 kernel: ftrace: allocated 193 pages with 3 groups
Dec  1 03:12:42 np0005540697 kernel: Dynamic Preempt: voluntary
Dec  1 03:12:42 np0005540697 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  1 03:12:42 np0005540697 kernel: rcu: 	RCU event tracing is enabled.
Dec  1 03:12:42 np0005540697 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  1 03:12:42 np0005540697 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  1 03:12:42 np0005540697 kernel: 	Rude variant of Tasks RCU enabled.
Dec  1 03:12:42 np0005540697 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  1 03:12:42 np0005540697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  1 03:12:42 np0005540697 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  1 03:12:42 np0005540697 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 03:12:42 np0005540697 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 03:12:42 np0005540697 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 03:12:42 np0005540697 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  1 03:12:42 np0005540697 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  1 03:12:42 np0005540697 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  1 03:12:42 np0005540697 kernel: Console: colour VGA+ 80x25
Dec  1 03:12:42 np0005540697 kernel: printk: console [ttyS0] enabled
Dec  1 03:12:42 np0005540697 kernel: ACPI: Core revision 20230331
Dec  1 03:12:42 np0005540697 kernel: APIC: Switch to symmetric I/O mode setup
Dec  1 03:12:42 np0005540697 kernel: x2apic enabled
Dec  1 03:12:42 np0005540697 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  1 03:12:42 np0005540697 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  1 03:12:42 np0005540697 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  1 03:12:42 np0005540697 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  1 03:12:42 np0005540697 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  1 03:12:42 np0005540697 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  1 03:12:42 np0005540697 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  1 03:12:42 np0005540697 kernel: Spectre V2 : Mitigation: Retpolines
Dec  1 03:12:42 np0005540697 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  1 03:12:42 np0005540697 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  1 03:12:42 np0005540697 kernel: RETBleed: Mitigation: untrained return thunk
Dec  1 03:12:42 np0005540697 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  1 03:12:42 np0005540697 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  1 03:12:42 np0005540697 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  1 03:12:42 np0005540697 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  1 03:12:42 np0005540697 kernel: x86/bugs: return thunk changed
Dec  1 03:12:42 np0005540697 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  1 03:12:42 np0005540697 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  1 03:12:42 np0005540697 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  1 03:12:42 np0005540697 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  1 03:12:42 np0005540697 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  1 03:12:42 np0005540697 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  1 03:12:42 np0005540697 kernel: Freeing SMP alternatives memory: 40K
Dec  1 03:12:42 np0005540697 kernel: pid_max: default: 32768 minimum: 301
Dec  1 03:12:42 np0005540697 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  1 03:12:42 np0005540697 kernel: landlock: Up and running.
Dec  1 03:12:42 np0005540697 kernel: Yama: becoming mindful.
Dec  1 03:12:42 np0005540697 kernel: SELinux:  Initializing.
Dec  1 03:12:42 np0005540697 kernel: LSM support for eBPF active
Dec  1 03:12:42 np0005540697 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  1 03:12:42 np0005540697 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  1 03:12:42 np0005540697 kernel: ... version:                0
Dec  1 03:12:42 np0005540697 kernel: ... bit width:              48
Dec  1 03:12:42 np0005540697 kernel: ... generic registers:      6
Dec  1 03:12:42 np0005540697 kernel: ... value mask:             0000ffffffffffff
Dec  1 03:12:42 np0005540697 kernel: ... max period:             00007fffffffffff
Dec  1 03:12:42 np0005540697 kernel: ... fixed-purpose events:   0
Dec  1 03:12:42 np0005540697 kernel: ... event mask:             000000000000003f
Dec  1 03:12:42 np0005540697 kernel: signal: max sigframe size: 1776
Dec  1 03:12:42 np0005540697 kernel: rcu: Hierarchical SRCU implementation.
Dec  1 03:12:42 np0005540697 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  1 03:12:42 np0005540697 kernel: smp: Bringing up secondary CPUs ...
Dec  1 03:12:42 np0005540697 kernel: smpboot: x86: Booting SMP configuration:
Dec  1 03:12:42 np0005540697 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  1 03:12:42 np0005540697 kernel: smp: Brought up 1 node, 8 CPUs
Dec  1 03:12:42 np0005540697 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  1 03:12:42 np0005540697 kernel: node 0 deferred pages initialised in 10ms
Dec  1 03:12:42 np0005540697 kernel: Memory: 7765924K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Dec  1 03:12:42 np0005540697 kernel: devtmpfs: initialized
Dec  1 03:12:42 np0005540697 kernel: x86/mm: Memory block size: 128MB
Dec  1 03:12:42 np0005540697 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  1 03:12:42 np0005540697 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: pinctrl core: initialized pinctrl subsystem
Dec  1 03:12:42 np0005540697 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  1 03:12:42 np0005540697 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  1 03:12:42 np0005540697 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  1 03:12:42 np0005540697 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  1 03:12:42 np0005540697 kernel: audit: initializing netlink subsys (disabled)
Dec  1 03:12:42 np0005540697 kernel: audit: type=2000 audit(1764576759.803:1): state=initialized audit_enabled=0 res=1
Dec  1 03:12:42 np0005540697 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  1 03:12:42 np0005540697 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  1 03:12:42 np0005540697 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  1 03:12:42 np0005540697 kernel: cpuidle: using governor menu
Dec  1 03:12:42 np0005540697 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  1 03:12:42 np0005540697 kernel: PCI: Using configuration type 1 for base access
Dec  1 03:12:42 np0005540697 kernel: PCI: Using configuration type 1 for extended access
Dec  1 03:12:42 np0005540697 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  1 03:12:42 np0005540697 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  1 03:12:42 np0005540697 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  1 03:12:42 np0005540697 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  1 03:12:42 np0005540697 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  1 03:12:42 np0005540697 kernel: Demotion targets for Node 0: null
Dec  1 03:12:42 np0005540697 kernel: cryptd: max_cpu_qlen set to 1000
Dec  1 03:12:42 np0005540697 kernel: ACPI: Added _OSI(Module Device)
Dec  1 03:12:42 np0005540697 kernel: ACPI: Added _OSI(Processor Device)
Dec  1 03:12:42 np0005540697 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  1 03:12:42 np0005540697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  1 03:12:42 np0005540697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  1 03:12:42 np0005540697 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  1 03:12:42 np0005540697 kernel: ACPI: Interpreter enabled
Dec  1 03:12:42 np0005540697 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  1 03:12:42 np0005540697 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  1 03:12:42 np0005540697 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  1 03:12:42 np0005540697 kernel: PCI: Using E820 reservations for host bridge windows
Dec  1 03:12:42 np0005540697 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  1 03:12:42 np0005540697 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  1 03:12:42 np0005540697 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [3] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [4] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [5] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [6] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [7] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [8] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [9] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [10] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [11] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [12] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [13] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [14] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [15] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [16] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [17] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [18] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [19] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [20] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [21] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [22] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [23] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [24] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [25] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [26] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [27] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [28] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [29] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [30] registered
Dec  1 03:12:42 np0005540697 kernel: acpiphp: Slot [31] registered
Dec  1 03:12:42 np0005540697 kernel: PCI host bridge to bus 0000:00
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  1 03:12:42 np0005540697 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  1 03:12:42 np0005540697 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  1 03:12:42 np0005540697 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  1 03:12:42 np0005540697 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  1 03:12:42 np0005540697 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  1 03:12:42 np0005540697 kernel: iommu: Default domain type: Translated
Dec  1 03:12:42 np0005540697 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  1 03:12:42 np0005540697 kernel: SCSI subsystem initialized
Dec  1 03:12:42 np0005540697 kernel: ACPI: bus type USB registered
Dec  1 03:12:42 np0005540697 kernel: usbcore: registered new interface driver usbfs
Dec  1 03:12:42 np0005540697 kernel: usbcore: registered new interface driver hub
Dec  1 03:12:42 np0005540697 kernel: usbcore: registered new device driver usb
Dec  1 03:12:42 np0005540697 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  1 03:12:42 np0005540697 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  1 03:12:42 np0005540697 kernel: PTP clock support registered
Dec  1 03:12:42 np0005540697 kernel: EDAC MC: Ver: 3.0.0
Dec  1 03:12:42 np0005540697 kernel: NetLabel: Initializing
Dec  1 03:12:42 np0005540697 kernel: NetLabel:  domain hash size = 128
Dec  1 03:12:42 np0005540697 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  1 03:12:42 np0005540697 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  1 03:12:42 np0005540697 kernel: PCI: Using ACPI for IRQ routing
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  1 03:12:42 np0005540697 kernel: vgaarb: loaded
Dec  1 03:12:42 np0005540697 kernel: clocksource: Switched to clocksource kvm-clock
Dec  1 03:12:42 np0005540697 kernel: VFS: Disk quotas dquot_6.6.0
Dec  1 03:12:42 np0005540697 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  1 03:12:42 np0005540697 kernel: pnp: PnP ACPI init
Dec  1 03:12:42 np0005540697 kernel: pnp: PnP ACPI: found 5 devices
Dec  1 03:12:42 np0005540697 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  1 03:12:42 np0005540697 kernel: NET: Registered PF_INET protocol family
Dec  1 03:12:42 np0005540697 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  1 03:12:42 np0005540697 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  1 03:12:42 np0005540697 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  1 03:12:42 np0005540697 kernel: NET: Registered PF_XDP protocol family
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  1 03:12:42 np0005540697 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  1 03:12:42 np0005540697 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  1 03:12:42 np0005540697 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 92248 usecs
Dec  1 03:12:42 np0005540697 kernel: PCI: CLS 0 bytes, default 64
Dec  1 03:12:42 np0005540697 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  1 03:12:42 np0005540697 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  1 03:12:42 np0005540697 kernel: Trying to unpack rootfs image as initramfs...
Dec  1 03:12:42 np0005540697 kernel: ACPI: bus type thunderbolt registered
Dec  1 03:12:42 np0005540697 kernel: Initialise system trusted keyrings
Dec  1 03:12:42 np0005540697 kernel: Key type blacklist registered
Dec  1 03:12:42 np0005540697 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  1 03:12:42 np0005540697 kernel: zbud: loaded
Dec  1 03:12:42 np0005540697 kernel: integrity: Platform Keyring initialized
Dec  1 03:12:42 np0005540697 kernel: integrity: Machine keyring initialized
Dec  1 03:12:42 np0005540697 kernel: Freeing initrd memory: 85868K
Dec  1 03:12:42 np0005540697 kernel: NET: Registered PF_ALG protocol family
Dec  1 03:12:42 np0005540697 kernel: xor: automatically using best checksumming function   avx
Dec  1 03:12:42 np0005540697 kernel: Key type asymmetric registered
Dec  1 03:12:42 np0005540697 kernel: Asymmetric key parser 'x509' registered
Dec  1 03:12:42 np0005540697 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  1 03:12:42 np0005540697 kernel: io scheduler mq-deadline registered
Dec  1 03:12:42 np0005540697 kernel: io scheduler kyber registered
Dec  1 03:12:42 np0005540697 kernel: io scheduler bfq registered
Dec  1 03:12:42 np0005540697 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  1 03:12:42 np0005540697 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  1 03:12:42 np0005540697 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  1 03:12:42 np0005540697 kernel: ACPI: button: Power Button [PWRF]
Dec  1 03:12:42 np0005540697 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  1 03:12:42 np0005540697 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  1 03:12:42 np0005540697 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  1 03:12:42 np0005540697 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  1 03:12:42 np0005540697 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  1 03:12:42 np0005540697 kernel: Non-volatile memory driver v1.3
Dec  1 03:12:42 np0005540697 kernel: rdac: device handler registered
Dec  1 03:12:42 np0005540697 kernel: hp_sw: device handler registered
Dec  1 03:12:42 np0005540697 kernel: emc: device handler registered
Dec  1 03:12:42 np0005540697 kernel: alua: device handler registered
Dec  1 03:12:42 np0005540697 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  1 03:12:42 np0005540697 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  1 03:12:42 np0005540697 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  1 03:12:42 np0005540697 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  1 03:12:42 np0005540697 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  1 03:12:42 np0005540697 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  1 03:12:42 np0005540697 kernel: usb usb1: Product: UHCI Host Controller
Dec  1 03:12:42 np0005540697 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Dec  1 03:12:42 np0005540697 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  1 03:12:42 np0005540697 kernel: hub 1-0:1.0: USB hub found
Dec  1 03:12:42 np0005540697 kernel: hub 1-0:1.0: 2 ports detected
Dec  1 03:12:42 np0005540697 kernel: usbcore: registered new interface driver usbserial_generic
Dec  1 03:12:42 np0005540697 kernel: usbserial: USB Serial support registered for generic
Dec  1 03:12:42 np0005540697 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  1 03:12:42 np0005540697 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  1 03:12:42 np0005540697 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  1 03:12:42 np0005540697 kernel: mousedev: PS/2 mouse device common for all mice
Dec  1 03:12:42 np0005540697 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  1 03:12:42 np0005540697 kernel: rtc_cmos 00:04: registered as rtc0
Dec  1 03:12:42 np0005540697 kernel: rtc_cmos 00:04: setting system clock to 2025-12-01T08:12:41 UTC (1764576761)
Dec  1 03:12:42 np0005540697 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  1 03:12:42 np0005540697 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  1 03:12:42 np0005540697 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  1 03:12:42 np0005540697 kernel: usbcore: registered new interface driver usbhid
Dec  1 03:12:42 np0005540697 kernel: usbhid: USB HID core driver
Dec  1 03:12:42 np0005540697 kernel: drop_monitor: Initializing network drop monitor service
Dec  1 03:12:42 np0005540697 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  1 03:12:42 np0005540697 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  1 03:12:42 np0005540697 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  1 03:12:42 np0005540697 kernel: Initializing XFRM netlink socket
Dec  1 03:12:42 np0005540697 kernel: NET: Registered PF_INET6 protocol family
Dec  1 03:12:42 np0005540697 kernel: Segment Routing with IPv6
Dec  1 03:12:42 np0005540697 kernel: NET: Registered PF_PACKET protocol family
Dec  1 03:12:42 np0005540697 kernel: mpls_gso: MPLS GSO support
Dec  1 03:12:42 np0005540697 kernel: IPI shorthand broadcast: enabled
Dec  1 03:12:42 np0005540697 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  1 03:12:42 np0005540697 kernel: AES CTR mode by8 optimization enabled
Dec  1 03:12:42 np0005540697 kernel: sched_clock: Marking stable (1299005261, 160593094)->(1540723670, -81125315)
Dec  1 03:12:42 np0005540697 kernel: registered taskstats version 1
Dec  1 03:12:42 np0005540697 kernel: Loading compiled-in X.509 certificates
Dec  1 03:12:42 np0005540697 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Dec  1 03:12:42 np0005540697 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  1 03:12:42 np0005540697 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  1 03:12:42 np0005540697 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  1 03:12:42 np0005540697 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  1 03:12:42 np0005540697 kernel: Demotion targets for Node 0: null
Dec  1 03:12:42 np0005540697 kernel: page_owner is disabled
Dec  1 03:12:42 np0005540697 kernel: Key type .fscrypt registered
Dec  1 03:12:42 np0005540697 kernel: Key type fscrypt-provisioning registered
Dec  1 03:12:42 np0005540697 kernel: Key type big_key registered
Dec  1 03:12:42 np0005540697 kernel: Key type encrypted registered
Dec  1 03:12:42 np0005540697 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  1 03:12:42 np0005540697 kernel: Loading compiled-in module X.509 certificates
Dec  1 03:12:42 np0005540697 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Dec  1 03:12:42 np0005540697 kernel: ima: Allocated hash algorithm: sha256
Dec  1 03:12:42 np0005540697 kernel: ima: No architecture policies found
Dec  1 03:12:42 np0005540697 kernel: evm: Initialising EVM extended attributes:
Dec  1 03:12:42 np0005540697 kernel: evm: security.selinux
Dec  1 03:12:42 np0005540697 kernel: evm: security.SMACK64 (disabled)
Dec  1 03:12:42 np0005540697 kernel: evm: security.SMACK64EXEC (disabled)
Dec  1 03:12:42 np0005540697 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  1 03:12:42 np0005540697 kernel: evm: security.SMACK64MMAP (disabled)
Dec  1 03:12:42 np0005540697 kernel: evm: security.apparmor (disabled)
Dec  1 03:12:42 np0005540697 kernel: evm: security.ima
Dec  1 03:12:42 np0005540697 kernel: evm: security.capability
Dec  1 03:12:42 np0005540697 kernel: evm: HMAC attrs: 0x1
Dec  1 03:12:42 np0005540697 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  1 03:12:42 np0005540697 kernel: Running certificate verification RSA selftest
Dec  1 03:12:42 np0005540697 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  1 03:12:42 np0005540697 kernel: Running certificate verification ECDSA selftest
Dec  1 03:12:42 np0005540697 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  1 03:12:42 np0005540697 kernel: clk: Disabling unused clocks
Dec  1 03:12:42 np0005540697 kernel: Freeing unused decrypted memory: 2028K
Dec  1 03:12:42 np0005540697 kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec  1 03:12:42 np0005540697 kernel: Write protecting the kernel read-only data: 30720k
Dec  1 03:12:42 np0005540697 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Dec  1 03:12:42 np0005540697 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  1 03:12:42 np0005540697 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  1 03:12:42 np0005540697 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  1 03:12:42 np0005540697 kernel: usb 1-1: Manufacturer: QEMU
Dec  1 03:12:42 np0005540697 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  1 03:12:42 np0005540697 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  1 03:12:42 np0005540697 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  1 03:12:42 np0005540697 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  1 03:12:42 np0005540697 kernel: Run /init as init process
Dec  1 03:12:42 np0005540697 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  1 03:12:42 np0005540697 systemd: Detected virtualization kvm.
Dec  1 03:12:42 np0005540697 systemd: Detected architecture x86-64.
Dec  1 03:12:42 np0005540697 systemd: Running in initrd.
Dec  1 03:12:42 np0005540697 systemd: No hostname configured, using default hostname.
Dec  1 03:12:42 np0005540697 systemd: Hostname set to <localhost>.
Dec  1 03:12:42 np0005540697 systemd: Initializing machine ID from VM UUID.
Dec  1 03:12:42 np0005540697 systemd: Queued start job for default target Initrd Default Target.
Dec  1 03:12:42 np0005540697 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  1 03:12:42 np0005540697 systemd: Reached target Local Encrypted Volumes.
Dec  1 03:12:42 np0005540697 systemd: Reached target Initrd /usr File System.
Dec  1 03:12:42 np0005540697 systemd: Reached target Local File Systems.
Dec  1 03:12:42 np0005540697 systemd: Reached target Path Units.
Dec  1 03:12:42 np0005540697 systemd: Reached target Slice Units.
Dec  1 03:12:42 np0005540697 systemd: Reached target Swaps.
Dec  1 03:12:42 np0005540697 systemd: Reached target Timer Units.
Dec  1 03:12:42 np0005540697 systemd: Listening on D-Bus System Message Bus Socket.
Dec  1 03:12:42 np0005540697 systemd: Listening on Journal Socket (/dev/log).
Dec  1 03:12:42 np0005540697 systemd: Listening on Journal Socket.
Dec  1 03:12:42 np0005540697 systemd: Listening on udev Control Socket.
Dec  1 03:12:42 np0005540697 systemd: Listening on udev Kernel Socket.
Dec  1 03:12:42 np0005540697 systemd: Reached target Socket Units.
Dec  1 03:12:42 np0005540697 systemd: Starting Create List of Static Device Nodes...
Dec  1 03:12:42 np0005540697 systemd: Starting Journal Service...
Dec  1 03:12:42 np0005540697 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  1 03:12:42 np0005540697 systemd: Starting Apply Kernel Variables...
Dec  1 03:12:42 np0005540697 systemd: Starting Create System Users...
Dec  1 03:12:42 np0005540697 systemd: Starting Setup Virtual Console...
Dec  1 03:12:42 np0005540697 systemd: Finished Create List of Static Device Nodes.
Dec  1 03:12:42 np0005540697 systemd: Finished Apply Kernel Variables.
Dec  1 03:12:42 np0005540697 systemd: Finished Create System Users.
Dec  1 03:12:42 np0005540697 systemd-journald[303]: Journal started
Dec  1 03:12:42 np0005540697 systemd-journald[303]: Runtime Journal (/run/log/journal/8504d282d8be435b9f17042283c7909f) is 8.0M, max 153.6M, 145.6M free.
Dec  1 03:12:42 np0005540697 systemd-sysusers[307]: Creating group 'users' with GID 100.
Dec  1 03:12:42 np0005540697 systemd-sysusers[307]: Creating group 'dbus' with GID 81.
Dec  1 03:12:42 np0005540697 systemd-sysusers[307]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  1 03:12:42 np0005540697 systemd: Started Journal Service.
Dec  1 03:12:42 np0005540697 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  1 03:12:42 np0005540697 systemd[1]: Starting Create Volatile Files and Directories...
Dec  1 03:12:42 np0005540697 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  1 03:12:42 np0005540697 systemd[1]: Finished Create Volatile Files and Directories.
Dec  1 03:12:42 np0005540697 systemd[1]: Finished Setup Virtual Console.
Dec  1 03:12:42 np0005540697 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  1 03:12:42 np0005540697 systemd[1]: Starting dracut cmdline hook...
Dec  1 03:12:42 np0005540697 dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Dec  1 03:12:42 np0005540697 dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 03:12:42 np0005540697 systemd[1]: Finished dracut cmdline hook.
Dec  1 03:12:42 np0005540697 systemd[1]: Starting dracut pre-udev hook...
Dec  1 03:12:42 np0005540697 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  1 03:12:42 np0005540697 kernel: device-mapper: uevent: version 1.0.3
Dec  1 03:12:42 np0005540697 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  1 03:12:42 np0005540697 kernel: RPC: Registered named UNIX socket transport module.
Dec  1 03:12:42 np0005540697 kernel: RPC: Registered udp transport module.
Dec  1 03:12:42 np0005540697 kernel: RPC: Registered tcp transport module.
Dec  1 03:12:42 np0005540697 kernel: RPC: Registered tcp-with-tls transport module.
Dec  1 03:12:42 np0005540697 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  1 03:12:42 np0005540697 rpc.statd[440]: Version 2.5.4 starting
Dec  1 03:12:42 np0005540697 rpc.statd[440]: Initializing NSM state
Dec  1 03:12:43 np0005540697 rpc.idmapd[445]: Setting log level to 0
Dec  1 03:12:43 np0005540697 systemd[1]: Finished dracut pre-udev hook.
Dec  1 03:12:43 np0005540697 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  1 03:12:43 np0005540697 systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Dec  1 03:12:43 np0005540697 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  1 03:12:43 np0005540697 systemd[1]: Starting dracut pre-trigger hook...
Dec  1 03:12:43 np0005540697 systemd[1]: Finished dracut pre-trigger hook.
Dec  1 03:12:43 np0005540697 systemd[1]: Starting Coldplug All udev Devices...
Dec  1 03:12:43 np0005540697 systemd[1]: Created slice Slice /system/modprobe.
Dec  1 03:12:43 np0005540697 systemd[1]: Starting Load Kernel Module configfs...
Dec  1 03:12:43 np0005540697 systemd[1]: Finished Coldplug All udev Devices.
Dec  1 03:12:43 np0005540697 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 03:12:43 np0005540697 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 03:12:43 np0005540697 systemd[1]: Mounting Kernel Configuration File System...
Dec  1 03:12:43 np0005540697 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  1 03:12:43 np0005540697 systemd[1]: Reached target Network.
Dec  1 03:12:43 np0005540697 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  1 03:12:43 np0005540697 systemd[1]: Starting dracut initqueue hook...
Dec  1 03:12:43 np0005540697 systemd[1]: Mounted Kernel Configuration File System.
Dec  1 03:12:43 np0005540697 systemd[1]: Reached target System Initialization.
Dec  1 03:12:43 np0005540697 systemd[1]: Reached target Basic System.
Dec  1 03:12:43 np0005540697 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  1 03:12:43 np0005540697 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  1 03:12:43 np0005540697 kernel: vda: vda1
Dec  1 03:12:43 np0005540697 systemd-udevd[483]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 03:12:43 np0005540697 kernel: scsi host0: ata_piix
Dec  1 03:12:43 np0005540697 kernel: scsi host1: ata_piix
Dec  1 03:12:43 np0005540697 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  1 03:12:43 np0005540697 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  1 03:12:43 np0005540697 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  1 03:12:43 np0005540697 systemd[1]: Reached target Initrd Root Device.
Dec  1 03:12:43 np0005540697 kernel: ata1: found unknown device (class 0)
Dec  1 03:12:43 np0005540697 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  1 03:12:43 np0005540697 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  1 03:12:43 np0005540697 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  1 03:12:43 np0005540697 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  1 03:12:43 np0005540697 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  1 03:12:43 np0005540697 systemd[1]: Finished dracut initqueue hook.
Dec  1 03:12:43 np0005540697 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  1 03:12:43 np0005540697 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  1 03:12:43 np0005540697 systemd[1]: Reached target Remote File Systems.
Dec  1 03:12:43 np0005540697 systemd[1]: Starting dracut pre-mount hook...
Dec  1 03:12:43 np0005540697 systemd[1]: Finished dracut pre-mount hook.
Dec  1 03:12:43 np0005540697 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Dec  1 03:12:43 np0005540697 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Dec  1 03:12:43 np0005540697 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  1 03:12:43 np0005540697 systemd[1]: Mounting /sysroot...
Dec  1 03:12:44 np0005540697 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  1 03:12:44 np0005540697 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Dec  1 03:12:44 np0005540697 kernel: XFS (vda1): Ending clean mount
Dec  1 03:12:44 np0005540697 systemd[1]: Mounted /sysroot.
Dec  1 03:12:44 np0005540697 systemd[1]: Reached target Initrd Root File System.
Dec  1 03:12:44 np0005540697 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  1 03:12:44 np0005540697 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  1 03:12:44 np0005540697 systemd[1]: Reached target Initrd File Systems.
Dec  1 03:12:44 np0005540697 systemd[1]: Reached target Initrd Default Target.
Dec  1 03:12:44 np0005540697 systemd[1]: Starting dracut mount hook...
Dec  1 03:12:44 np0005540697 systemd[1]: Finished dracut mount hook.
Dec  1 03:12:44 np0005540697 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  1 03:12:44 np0005540697 rpc.idmapd[445]: exiting on signal 15
Dec  1 03:12:44 np0005540697 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  1 03:12:44 np0005540697 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Network.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Timer Units.
Dec  1 03:12:44 np0005540697 systemd[1]: dbus.socket: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  1 03:12:44 np0005540697 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Initrd Default Target.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Basic System.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Initrd Root Device.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Initrd /usr File System.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Path Units.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Remote File Systems.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Slice Units.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Socket Units.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target System Initialization.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Local File Systems.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Swaps.
Dec  1 03:12:44 np0005540697 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped dracut mount hook.
Dec  1 03:12:44 np0005540697 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped dracut pre-mount hook.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  1 03:12:44 np0005540697 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped dracut initqueue hook.
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Apply Kernel Variables.
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Coldplug All udev Devices.
Dec  1 03:12:44 np0005540697 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped dracut pre-trigger hook.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Setup Virtual Console.
Dec  1 03:12:44 np0005540697 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  1 03:12:44 np0005540697 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Closed udev Control Socket.
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Closed udev Kernel Socket.
Dec  1 03:12:44 np0005540697 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped dracut pre-udev hook.
Dec  1 03:12:44 np0005540697 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped dracut cmdline hook.
Dec  1 03:12:44 np0005540697 systemd[1]: Starting Cleanup udev Database...
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  1 03:12:44 np0005540697 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  1 03:12:44 np0005540697 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Stopped Create System Users.
Dec  1 03:12:44 np0005540697 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  1 03:12:44 np0005540697 systemd[1]: Finished Cleanup udev Database.
Dec  1 03:12:44 np0005540697 systemd[1]: Reached target Switch Root.
Dec  1 03:12:44 np0005540697 systemd[1]: Starting Switch Root...
Dec  1 03:12:44 np0005540697 systemd[1]: Switching root.
Dec  1 03:12:44 np0005540697 systemd-journald[303]: Journal stopped
Dec  1 03:12:45 np0005540697 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  1 03:12:45 np0005540697 kernel: audit: type=1404 audit(1764576764.882:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  1 03:12:45 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 03:12:45 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 03:12:45 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 03:12:45 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 03:12:45 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 03:12:45 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 03:12:45 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 03:12:45 np0005540697 kernel: audit: type=1403 audit(1764576765.010:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  1 03:12:45 np0005540697 systemd: Successfully loaded SELinux policy in 130.740ms.
Dec  1 03:12:45 np0005540697 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.817ms.
Dec  1 03:12:45 np0005540697 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  1 03:12:45 np0005540697 systemd: Detected virtualization kvm.
Dec  1 03:12:45 np0005540697 systemd: Detected architecture x86-64.
Dec  1 03:12:45 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:12:45 np0005540697 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  1 03:12:45 np0005540697 systemd: Stopped Switch Root.
Dec  1 03:12:45 np0005540697 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  1 03:12:45 np0005540697 systemd: Created slice Slice /system/getty.
Dec  1 03:12:45 np0005540697 systemd: Created slice Slice /system/serial-getty.
Dec  1 03:12:45 np0005540697 systemd: Created slice Slice /system/sshd-keygen.
Dec  1 03:12:45 np0005540697 systemd: Created slice User and Session Slice.
Dec  1 03:12:45 np0005540697 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  1 03:12:45 np0005540697 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  1 03:12:45 np0005540697 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  1 03:12:45 np0005540697 systemd: Reached target Local Encrypted Volumes.
Dec  1 03:12:45 np0005540697 systemd: Stopped target Switch Root.
Dec  1 03:12:45 np0005540697 systemd: Stopped target Initrd File Systems.
Dec  1 03:12:45 np0005540697 systemd: Stopped target Initrd Root File System.
Dec  1 03:12:45 np0005540697 systemd: Reached target Local Integrity Protected Volumes.
Dec  1 03:12:45 np0005540697 systemd: Reached target Path Units.
Dec  1 03:12:45 np0005540697 systemd: Reached target rpc_pipefs.target.
Dec  1 03:12:45 np0005540697 systemd: Reached target Slice Units.
Dec  1 03:12:45 np0005540697 systemd: Reached target Swaps.
Dec  1 03:12:45 np0005540697 systemd: Reached target Local Verity Protected Volumes.
Dec  1 03:12:45 np0005540697 systemd: Listening on RPCbind Server Activation Socket.
Dec  1 03:12:45 np0005540697 systemd: Reached target RPC Port Mapper.
Dec  1 03:12:45 np0005540697 systemd: Listening on Process Core Dump Socket.
Dec  1 03:12:45 np0005540697 systemd: Listening on initctl Compatibility Named Pipe.
Dec  1 03:12:45 np0005540697 systemd: Listening on udev Control Socket.
Dec  1 03:12:45 np0005540697 systemd: Listening on udev Kernel Socket.
Dec  1 03:12:45 np0005540697 systemd: Mounting Huge Pages File System...
Dec  1 03:12:45 np0005540697 systemd: Mounting POSIX Message Queue File System...
Dec  1 03:12:45 np0005540697 systemd: Mounting Kernel Debug File System...
Dec  1 03:12:45 np0005540697 systemd: Mounting Kernel Trace File System...
Dec  1 03:12:45 np0005540697 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  1 03:12:45 np0005540697 systemd: Starting Create List of Static Device Nodes...
Dec  1 03:12:45 np0005540697 systemd: Starting Load Kernel Module configfs...
Dec  1 03:12:45 np0005540697 systemd: Starting Load Kernel Module drm...
Dec  1 03:12:45 np0005540697 systemd: Starting Load Kernel Module efi_pstore...
Dec  1 03:12:45 np0005540697 systemd: Starting Load Kernel Module fuse...
Dec  1 03:12:45 np0005540697 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  1 03:12:45 np0005540697 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  1 03:12:45 np0005540697 systemd: Stopped File System Check on Root Device.
Dec  1 03:12:45 np0005540697 systemd: Stopped Journal Service.
Dec  1 03:12:45 np0005540697 kernel: fuse: init (API version 7.37)
Dec  1 03:12:45 np0005540697 systemd: Starting Journal Service...
Dec  1 03:12:45 np0005540697 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  1 03:12:45 np0005540697 systemd: Starting Generate network units from Kernel command line...
Dec  1 03:12:45 np0005540697 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 03:12:45 np0005540697 systemd: Starting Remount Root and Kernel File Systems...
Dec  1 03:12:45 np0005540697 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  1 03:12:45 np0005540697 systemd: Starting Apply Kernel Variables...
Dec  1 03:12:45 np0005540697 systemd-journald[679]: Journal started
Dec  1 03:12:45 np0005540697 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  1 03:12:45 np0005540697 systemd[1]: Queued start job for default target Multi-User System.
Dec  1 03:12:45 np0005540697 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  1 03:12:45 np0005540697 systemd: Starting Coldplug All udev Devices...
Dec  1 03:12:45 np0005540697 systemd: Started Journal Service.
Dec  1 03:12:45 np0005540697 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  1 03:12:45 np0005540697 systemd[1]: Mounted Huge Pages File System.
Dec  1 03:12:45 np0005540697 systemd[1]: Mounted POSIX Message Queue File System.
Dec  1 03:12:45 np0005540697 systemd[1]: Mounted Kernel Debug File System.
Dec  1 03:12:45 np0005540697 systemd[1]: Mounted Kernel Trace File System.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Create List of Static Device Nodes.
Dec  1 03:12:45 np0005540697 kernel: ACPI: bus type drm_connector registered
Dec  1 03:12:45 np0005540697 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 03:12:45 np0005540697 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Load Kernel Module drm.
Dec  1 03:12:45 np0005540697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  1 03:12:45 np0005540697 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Load Kernel Module fuse.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Generate network units from Kernel command line.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Apply Kernel Variables.
Dec  1 03:12:45 np0005540697 systemd[1]: Mounting FUSE Control File System...
Dec  1 03:12:45 np0005540697 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Rebuild Hardware Database...
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  1 03:12:45 np0005540697 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Load/Save OS Random Seed...
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Create System Users...
Dec  1 03:12:45 np0005540697 systemd[1]: Mounted FUSE Control File System.
Dec  1 03:12:45 np0005540697 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  1 03:12:45 np0005540697 systemd-journald[679]: Received client request to flush runtime journal.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Load/Save OS Random Seed.
Dec  1 03:12:45 np0005540697 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Create System Users.
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Coldplug All udev Devices.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  1 03:12:45 np0005540697 systemd[1]: Reached target Preparation for Local File Systems.
Dec  1 03:12:45 np0005540697 systemd[1]: Reached target Local File Systems.
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  1 03:12:45 np0005540697 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  1 03:12:45 np0005540697 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  1 03:12:45 np0005540697 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Automatic Boot Loader Update...
Dec  1 03:12:45 np0005540697 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Create Volatile Files and Directories...
Dec  1 03:12:45 np0005540697 bootctl[697]: Couldn't find EFI system partition, skipping.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Automatic Boot Loader Update.
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Create Volatile Files and Directories.
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Security Auditing Service...
Dec  1 03:12:45 np0005540697 systemd[1]: Starting RPC Bind...
Dec  1 03:12:45 np0005540697 systemd[1]: Starting Rebuild Journal Catalog...
Dec  1 03:12:45 np0005540697 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  1 03:12:45 np0005540697 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  1 03:12:45 np0005540697 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  1 03:12:45 np0005540697 systemd[1]: Started RPC Bind.
Dec  1 03:12:46 np0005540697 systemd[1]: Finished Rebuild Journal Catalog.
Dec  1 03:12:46 np0005540697 augenrules[708]: /sbin/augenrules: No change
Dec  1 03:12:46 np0005540697 augenrules[723]: No rules
Dec  1 03:12:46 np0005540697 augenrules[723]: enabled 1
Dec  1 03:12:46 np0005540697 augenrules[723]: failure 1
Dec  1 03:12:46 np0005540697 augenrules[723]: pid 703
Dec  1 03:12:46 np0005540697 augenrules[723]: rate_limit 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_limit 8192
Dec  1 03:12:46 np0005540697 augenrules[723]: lost 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_wait_time 60000
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_wait_time_actual 0
Dec  1 03:12:46 np0005540697 augenrules[723]: enabled 1
Dec  1 03:12:46 np0005540697 augenrules[723]: failure 1
Dec  1 03:12:46 np0005540697 augenrules[723]: pid 703
Dec  1 03:12:46 np0005540697 augenrules[723]: rate_limit 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_limit 8192
Dec  1 03:12:46 np0005540697 augenrules[723]: lost 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_wait_time 60000
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_wait_time_actual 0
Dec  1 03:12:46 np0005540697 augenrules[723]: enabled 1
Dec  1 03:12:46 np0005540697 augenrules[723]: failure 1
Dec  1 03:12:46 np0005540697 augenrules[723]: pid 703
Dec  1 03:12:46 np0005540697 augenrules[723]: rate_limit 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_limit 8192
Dec  1 03:12:46 np0005540697 augenrules[723]: lost 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog 0
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_wait_time 60000
Dec  1 03:12:46 np0005540697 augenrules[723]: backlog_wait_time_actual 0
Dec  1 03:12:46 np0005540697 systemd[1]: Started Security Auditing Service.
Dec  1 03:12:46 np0005540697 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  1 03:12:46 np0005540697 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  1 03:12:46 np0005540697 systemd[1]: Finished Rebuild Hardware Database.
Dec  1 03:12:46 np0005540697 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  1 03:12:46 np0005540697 systemd[1]: Starting Update is Completed...
Dec  1 03:12:46 np0005540697 systemd[1]: Finished Update is Completed.
Dec  1 03:12:46 np0005540697 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Dec  1 03:12:46 np0005540697 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  1 03:12:46 np0005540697 systemd[1]: Reached target System Initialization.
Dec  1 03:12:46 np0005540697 systemd[1]: Started dnf makecache --timer.
Dec  1 03:12:46 np0005540697 systemd[1]: Started Daily rotation of log files.
Dec  1 03:12:46 np0005540697 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  1 03:12:46 np0005540697 systemd[1]: Reached target Timer Units.
Dec  1 03:12:46 np0005540697 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  1 03:12:46 np0005540697 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  1 03:12:46 np0005540697 systemd[1]: Reached target Socket Units.
Dec  1 03:12:46 np0005540697 systemd[1]: Starting D-Bus System Message Bus...
Dec  1 03:12:46 np0005540697 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 03:12:46 np0005540697 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  1 03:12:46 np0005540697 systemd[1]: Starting Load Kernel Module configfs...
Dec  1 03:12:46 np0005540697 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 03:12:46 np0005540697 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 03:12:46 np0005540697 systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 03:12:46 np0005540697 systemd[1]: Started D-Bus System Message Bus.
Dec  1 03:12:46 np0005540697 systemd[1]: Reached target Basic System.
Dec  1 03:12:46 np0005540697 dbus-broker-lau[767]: Ready
Dec  1 03:12:46 np0005540697 systemd[1]: Starting NTP client/server...
Dec  1 03:12:46 np0005540697 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  1 03:12:46 np0005540697 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  1 03:12:46 np0005540697 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  1 03:12:46 np0005540697 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  1 03:12:46 np0005540697 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  1 03:12:46 np0005540697 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  1 03:12:46 np0005540697 systemd[1]: Starting IPv4 firewall with iptables...
Dec  1 03:12:46 np0005540697 systemd[1]: Started irqbalance daemon.
Dec  1 03:12:46 np0005540697 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  1 03:12:46 np0005540697 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 03:12:46 np0005540697 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 03:12:46 np0005540697 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 03:12:46 np0005540697 systemd[1]: Reached target sshd-keygen.target.
Dec  1 03:12:46 np0005540697 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  1 03:12:46 np0005540697 systemd[1]: Reached target User and Group Name Lookups.
Dec  1 03:12:46 np0005540697 chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  1 03:12:46 np0005540697 chronyd[795]: Loaded 0 symmetric keys
Dec  1 03:12:46 np0005540697 chronyd[795]: Using right/UTC timezone to obtain leap second data
Dec  1 03:12:46 np0005540697 chronyd[795]: Loaded seccomp filter (level 2)
Dec  1 03:12:46 np0005540697 systemd[1]: Starting User Login Management...
Dec  1 03:12:46 np0005540697 systemd[1]: Started NTP client/server.
Dec  1 03:12:46 np0005540697 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  1 03:12:46 np0005540697 systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  1 03:12:46 np0005540697 systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  1 03:12:46 np0005540697 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  1 03:12:46 np0005540697 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  1 03:12:46 np0005540697 kernel: Console: switching to colour dummy device 80x25
Dec  1 03:12:46 np0005540697 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  1 03:12:46 np0005540697 kernel: [drm] features: -context_init
Dec  1 03:12:46 np0005540697 systemd-logind[792]: New seat seat0.
Dec  1 03:12:46 np0005540697 systemd[1]: Started User Login Management.
Dec  1 03:12:46 np0005540697 kernel: [drm] number of scanouts: 1
Dec  1 03:12:46 np0005540697 kernel: [drm] number of cap sets: 0
Dec  1 03:12:46 np0005540697 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  1 03:12:46 np0005540697 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  1 03:12:46 np0005540697 kernel: Console: switching to colour frame buffer device 128x48
Dec  1 03:12:46 np0005540697 kernel: kvm_amd: TSC scaling supported
Dec  1 03:12:46 np0005540697 kernel: kvm_amd: Nested Virtualization enabled
Dec  1 03:12:46 np0005540697 kernel: kvm_amd: Nested Paging enabled
Dec  1 03:12:46 np0005540697 kernel: kvm_amd: LBR virtualization supported
Dec  1 03:12:46 np0005540697 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  1 03:12:46 np0005540697 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  1 03:12:46 np0005540697 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  1 03:12:46 np0005540697 iptables.init[783]: iptables: Applying firewall rules: [  OK  ]
Dec  1 03:12:46 np0005540697 systemd[1]: Finished IPv4 firewall with iptables.
Dec  1 03:12:47 np0005540697 cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 01 Dec 2025 08:12:46 +0000. Up 6.74 seconds.
Dec  1 03:12:47 np0005540697 systemd[1]: run-cloud\x2dinit-tmp-tmpolfhel4u.mount: Deactivated successfully.
Dec  1 03:12:47 np0005540697 systemd[1]: Starting Hostname Service...
Dec  1 03:12:47 np0005540697 systemd[1]: Started Hostname Service.
Dec  1 03:12:47 np0005540697 systemd-hostnamed[855]: Hostname set to <np0005540697.novalocal> (static)
Dec  1 03:12:47 np0005540697 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  1 03:12:47 np0005540697 systemd[1]: Reached target Preparation for Network.
Dec  1 03:12:47 np0005540697 systemd[1]: Starting Network Manager...
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6249] NetworkManager (version 1.54.1-1.el9) is starting... (boot:92b7c342-66b4-4a80-acaf-17e049c1eafe)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6255] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6368] manager[0x55b48f283080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6427] hostname: hostname: using hostnamed
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6427] hostname: static hostname changed from (none) to "np0005540697.novalocal"
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6435] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6604] manager[0x55b48f283080]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6605] manager[0x55b48f283080]: rfkill: WWAN hardware radio set enabled
Dec  1 03:12:47 np0005540697 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6706] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6707] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6708] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6708] manager: Networking is enabled by state file
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6712] settings: Loaded settings plugin: keyfile (internal)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6724] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6757] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6775] dhcp: init: Using DHCP client 'internal'
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6780] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6801] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6812] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6824] device (lo): Activation: starting connection 'lo' (dfe05a7d-1dbe-4572-b7fc-2c1528ca0986)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6838] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6843] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6881] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6886] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6891] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6893] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6897] device (eth0): carrier: link connected
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6901] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6911] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6921] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6928] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6930] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6934] manager: NetworkManager state is now CONNECTING
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6937] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6948] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6954] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:12:47 np0005540697 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.6995] dhcp4 (eth0): state changed new lease, address=38.102.83.13
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7010] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 03:12:47 np0005540697 systemd[1]: Started Network Manager.
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7048] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:12:47 np0005540697 systemd[1]: Reached target Network.
Dec  1 03:12:47 np0005540697 systemd[1]: Starting Network Manager Wait Online...
Dec  1 03:12:47 np0005540697 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  1 03:12:47 np0005540697 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7208] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7212] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7226] device (lo): Activation: successful, device activated.
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7240] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7243] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7252] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7258] device (eth0): Activation: successful, device activated.
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7269] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 03:12:47 np0005540697 NetworkManager[859]: <info>  [1764576767.7275] manager: startup complete
Dec  1 03:12:47 np0005540697 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  1 03:12:47 np0005540697 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  1 03:12:47 np0005540697 systemd[1]: Reached target NFS client services.
Dec  1 03:12:47 np0005540697 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  1 03:12:47 np0005540697 systemd[1]: Reached target Remote File Systems.
Dec  1 03:12:47 np0005540697 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 03:12:47 np0005540697 systemd[1]: Finished Network Manager Wait Online.
Dec  1 03:12:47 np0005540697 systemd[1]: Starting Cloud-init: Network Stage...
Dec  1 03:12:48 np0005540697 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 01 Dec 2025 08:12:48 +0000. Up 7.77 seconds.
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.13        | 255.255.255.0 | global | fa:16:3e:6a:0e:d6 |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe6a:ed6/64 |       .       |  link  | fa:16:3e:6a:0e:d6 |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  1 03:12:48 np0005540697 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 03:12:49 np0005540697 cloud-init[922]: Generating public/private rsa key pair.
Dec  1 03:12:49 np0005540697 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  1 03:12:49 np0005540697 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  1 03:12:49 np0005540697 cloud-init[922]: The key fingerprint is:
Dec  1 03:12:49 np0005540697 cloud-init[922]: SHA256:X7zsn18NvCFxPXssF+JZDiIl4V1yEGtoM8v4VhxFDf4 root@np0005540697.novalocal
Dec  1 03:12:49 np0005540697 cloud-init[922]: The key's randomart image is:
Dec  1 03:12:49 np0005540697 cloud-init[922]: +---[RSA 3072]----+
Dec  1 03:12:49 np0005540697 cloud-init[922]: |         o.=+*o  |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |        . = B  o |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |         B *.+.+.|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |        + B.++B.+|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |       .So o+o=E+|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |        ...o o *o|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |         o. o . o|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |        .  .   ..|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |            ..o..|
Dec  1 03:12:49 np0005540697 cloud-init[922]: +----[SHA256]-----+
Dec  1 03:12:49 np0005540697 cloud-init[922]: Generating public/private ecdsa key pair.
Dec  1 03:12:49 np0005540697 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  1 03:12:49 np0005540697 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  1 03:12:49 np0005540697 cloud-init[922]: The key fingerprint is:
Dec  1 03:12:49 np0005540697 cloud-init[922]: SHA256:T79HJ15g7qNUKydIsRHPRQQn5x0B8vB/jfHjVZjGS10 root@np0005540697.novalocal
Dec  1 03:12:49 np0005540697 cloud-init[922]: The key's randomart image is:
Dec  1 03:12:49 np0005540697 cloud-init[922]: +---[ECDSA 256]---+
Dec  1 03:12:49 np0005540697 cloud-init[922]: |          .oo=BoE|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |           +=B =o|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |          o ooO +|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |           + o+=o|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |        S +  o++=|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |         + o .=.*|
Dec  1 03:12:49 np0005540697 cloud-init[922]: |          o =+o= |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |           . ==  |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |            oo . |
Dec  1 03:12:49 np0005540697 cloud-init[922]: +----[SHA256]-----+
Dec  1 03:12:49 np0005540697 cloud-init[922]: Generating public/private ed25519 key pair.
Dec  1 03:12:49 np0005540697 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  1 03:12:49 np0005540697 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  1 03:12:49 np0005540697 cloud-init[922]: The key fingerprint is:
Dec  1 03:12:49 np0005540697 cloud-init[922]: SHA256:G4uh+E4WBwZJW6Qbwy9axEPZ/01TqZROhoDISLJFDbw root@np0005540697.novalocal
Dec  1 03:12:49 np0005540697 cloud-init[922]: The key's randomart image is:
Dec  1 03:12:49 np0005540697 cloud-init[922]: +--[ED25519 256]--+
Dec  1 03:12:49 np0005540697 cloud-init[922]: |oBBOo... . . .   |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |o**=+   . = o    |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |. X.o.   = o     |
Dec  1 03:12:49 np0005540697 cloud-init[922]: | .EB ..   =      |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |  + o o.So .     |
Dec  1 03:12:49 np0005540697 cloud-init[922]: | o o + o.+.      |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |. . + . o        |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |   +             |
Dec  1 03:12:49 np0005540697 cloud-init[922]: |   .o            |
Dec  1 03:12:49 np0005540697 cloud-init[922]: +----[SHA256]-----+
Dec  1 03:12:49 np0005540697 sm-notify[1005]: Version 2.5.4 starting
Dec  1 03:12:49 np0005540697 systemd[1]: Finished Cloud-init: Network Stage.
Dec  1 03:12:49 np0005540697 systemd[1]: Reached target Cloud-config availability.
Dec  1 03:12:49 np0005540697 systemd[1]: Reached target Network is Online.
Dec  1 03:12:49 np0005540697 systemd[1]: Starting Cloud-init: Config Stage...
Dec  1 03:12:49 np0005540697 systemd[1]: Starting Crash recovery kernel arming...
Dec  1 03:12:49 np0005540697 systemd[1]: Starting Notify NFS peers of a restart...
Dec  1 03:12:49 np0005540697 systemd[1]: Starting System Logging Service...
Dec  1 03:12:49 np0005540697 systemd[1]: Starting OpenSSH server daemon...
Dec  1 03:12:49 np0005540697 systemd[1]: Starting Permit User Sessions...
Dec  1 03:12:49 np0005540697 systemd[1]: Started Notify NFS peers of a restart.
Dec  1 03:12:49 np0005540697 systemd[1]: Started OpenSSH server daemon.
Dec  1 03:12:49 np0005540697 systemd[1]: Finished Permit User Sessions.
Dec  1 03:12:49 np0005540697 systemd[1]: Started Command Scheduler.
Dec  1 03:12:49 np0005540697 systemd[1]: Started Getty on tty1.
Dec  1 03:12:49 np0005540697 systemd[1]: Started Serial Getty on ttyS0.
Dec  1 03:12:49 np0005540697 systemd[1]: Reached target Login Prompts.
Dec  1 03:12:49 np0005540697 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec  1 03:12:49 np0005540697 systemd[1]: Started System Logging Service.
Dec  1 03:12:49 np0005540697 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  1 03:12:49 np0005540697 systemd[1]: Reached target Multi-User System.
Dec  1 03:12:49 np0005540697 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  1 03:12:49 np0005540697 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  1 03:12:49 np0005540697 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  1 03:12:49 np0005540697 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 03:12:49 np0005540697 kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Dec  1 03:12:49 np0005540697 kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Dec  1 03:12:49 np0005540697 cloud-init[1110]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 01 Dec 2025 08:12:49 +0000. Up 9.57 seconds.
Dec  1 03:12:49 np0005540697 systemd[1]: Finished Cloud-init: Config Stage.
Dec  1 03:12:49 np0005540697 systemd[1]: Starting Cloud-init: Final Stage...
Dec  1 03:12:50 np0005540697 cloud-init[1256]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 01 Dec 2025 08:12:50 +0000. Up 9.97 seconds.
Dec  1 03:12:50 np0005540697 cloud-init[1269]: #############################################################
Dec  1 03:12:50 np0005540697 cloud-init[1272]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  1 03:12:50 np0005540697 dracut[1271]: dracut-057-102.git20250818.el9
Dec  1 03:12:50 np0005540697 cloud-init[1275]: 256 SHA256:T79HJ15g7qNUKydIsRHPRQQn5x0B8vB/jfHjVZjGS10 root@np0005540697.novalocal (ECDSA)
Dec  1 03:12:50 np0005540697 cloud-init[1283]: 256 SHA256:G4uh+E4WBwZJW6Qbwy9axEPZ/01TqZROhoDISLJFDbw root@np0005540697.novalocal (ED25519)
Dec  1 03:12:50 np0005540697 cloud-init[1293]: 3072 SHA256:X7zsn18NvCFxPXssF+JZDiIl4V1yEGtoM8v4VhxFDf4 root@np0005540697.novalocal (RSA)
Dec  1 03:12:50 np0005540697 cloud-init[1294]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  1 03:12:50 np0005540697 cloud-init[1295]: #############################################################
Dec  1 03:12:50 np0005540697 cloud-init[1256]: Cloud-init v. 24.4-7.el9 finished at Mon, 01 Dec 2025 08:12:50 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.16 seconds
Dec  1 03:12:50 np0005540697 systemd[1]: Finished Cloud-init: Final Stage.
Dec  1 03:12:50 np0005540697 systemd[1]: Reached target Cloud-init target.
Dec  1 03:12:50 np0005540697 dracut[1276]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: memstrack is not available
Dec  1 03:12:51 np0005540697 dracut[1276]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  1 03:12:51 np0005540697 dracut[1276]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  1 03:12:52 np0005540697 dracut[1276]: memstrack is not available
Dec  1 03:12:52 np0005540697 dracut[1276]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  1 03:12:52 np0005540697 dracut[1276]: *** Including module: systemd ***
Dec  1 03:12:52 np0005540697 chronyd[795]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Dec  1 03:12:52 np0005540697 chronyd[795]: System clock TAI offset set to 37 seconds
Dec  1 03:12:52 np0005540697 dracut[1276]: *** Including module: fips ***
Dec  1 03:12:53 np0005540697 dracut[1276]: *** Including module: systemd-initrd ***
Dec  1 03:12:53 np0005540697 dracut[1276]: *** Including module: i18n ***
Dec  1 03:12:53 np0005540697 dracut[1276]: *** Including module: drm ***
Dec  1 03:12:53 np0005540697 dracut[1276]: *** Including module: prefixdevname ***
Dec  1 03:12:53 np0005540697 dracut[1276]: *** Including module: kernel-modules ***
Dec  1 03:12:53 np0005540697 kernel: block vda: the capability attribute has been deprecated.
Dec  1 03:12:54 np0005540697 dracut[1276]: *** Including module: kernel-modules-extra ***
Dec  1 03:12:54 np0005540697 dracut[1276]: *** Including module: qemu ***
Dec  1 03:12:54 np0005540697 dracut[1276]: *** Including module: fstab-sys ***
Dec  1 03:12:54 np0005540697 dracut[1276]: *** Including module: rootfs-block ***
Dec  1 03:12:54 np0005540697 chronyd[795]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Dec  1 03:12:54 np0005540697 dracut[1276]: *** Including module: terminfo ***
Dec  1 03:12:54 np0005540697 dracut[1276]: *** Including module: udev-rules ***
Dec  1 03:12:55 np0005540697 dracut[1276]: Skipping udev rule: 91-permissions.rules
Dec  1 03:12:55 np0005540697 dracut[1276]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  1 03:12:55 np0005540697 dracut[1276]: *** Including module: virtiofs ***
Dec  1 03:12:55 np0005540697 dracut[1276]: *** Including module: dracut-systemd ***
Dec  1 03:12:55 np0005540697 dracut[1276]: *** Including module: usrmount ***
Dec  1 03:12:55 np0005540697 dracut[1276]: *** Including module: base ***
Dec  1 03:12:55 np0005540697 dracut[1276]: *** Including module: fs-lib ***
Dec  1 03:12:55 np0005540697 dracut[1276]: *** Including module: kdumpbase ***
Dec  1 03:12:56 np0005540697 dracut[1276]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  1 03:12:56 np0005540697 dracut[1276]:  microcode_ctl module: mangling fw_dir
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  1 03:12:56 np0005540697 dracut[1276]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  1 03:12:56 np0005540697 dracut[1276]: *** Including module: openssl ***
Dec  1 03:12:56 np0005540697 dracut[1276]: *** Including module: shutdown ***
Dec  1 03:12:56 np0005540697 dracut[1276]: *** Including module: squash ***
Dec  1 03:12:56 np0005540697 dracut[1276]: *** Including modules done ***
Dec  1 03:12:56 np0005540697 dracut[1276]: *** Installing kernel module dependencies ***
Dec  1 03:12:57 np0005540697 irqbalance[786]: Cannot change IRQ 35 affinity: Operation not permitted
Dec  1 03:12:57 np0005540697 irqbalance[786]: IRQ 35 affinity is now unmanaged
Dec  1 03:12:57 np0005540697 irqbalance[786]: Cannot change IRQ 33 affinity: Operation not permitted
Dec  1 03:12:57 np0005540697 irqbalance[786]: IRQ 33 affinity is now unmanaged
Dec  1 03:12:57 np0005540697 irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  1 03:12:57 np0005540697 irqbalance[786]: IRQ 31 affinity is now unmanaged
Dec  1 03:12:57 np0005540697 irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  1 03:12:57 np0005540697 irqbalance[786]: IRQ 28 affinity is now unmanaged
Dec  1 03:12:57 np0005540697 irqbalance[786]: Cannot change IRQ 34 affinity: Operation not permitted
Dec  1 03:12:57 np0005540697 irqbalance[786]: IRQ 34 affinity is now unmanaged
Dec  1 03:12:57 np0005540697 irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  1 03:12:57 np0005540697 irqbalance[786]: IRQ 32 affinity is now unmanaged
Dec  1 03:12:57 np0005540697 irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  1 03:12:57 np0005540697 irqbalance[786]: IRQ 30 affinity is now unmanaged
Dec  1 03:12:57 np0005540697 irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  1 03:12:57 np0005540697 irqbalance[786]: IRQ 29 affinity is now unmanaged
Dec  1 03:12:57 np0005540697 dracut[1276]: *** Installing kernel module dependencies done ***
Dec  1 03:12:57 np0005540697 dracut[1276]: *** Resolving executable dependencies ***
Dec  1 03:12:57 np0005540697 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 03:12:59 np0005540697 dracut[1276]: *** Resolving executable dependencies done ***
Dec  1 03:12:59 np0005540697 dracut[1276]: *** Generating early-microcode cpio image ***
Dec  1 03:12:59 np0005540697 dracut[1276]: *** Store current command line parameters ***
Dec  1 03:12:59 np0005540697 dracut[1276]: Stored kernel commandline:
Dec  1 03:12:59 np0005540697 dracut[1276]: No dracut internal kernel commandline stored in the initramfs
Dec  1 03:12:59 np0005540697 dracut[1276]: *** Install squash loader ***
Dec  1 03:13:00 np0005540697 dracut[1276]: *** Squashing the files inside the initramfs ***
Dec  1 03:13:01 np0005540697 dracut[1276]: *** Squashing the files inside the initramfs done ***
Dec  1 03:13:01 np0005540697 dracut[1276]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Dec  1 03:13:01 np0005540697 dracut[1276]: *** Hardlinking files ***
Dec  1 03:13:01 np0005540697 dracut[1276]: *** Hardlinking files done ***
Dec  1 03:13:01 np0005540697 dracut[1276]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Dec  1 03:13:02 np0005540697 kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Dec  1 03:13:02 np0005540697 kdumpctl[1015]: kdump: Starting kdump: [OK]
Dec  1 03:13:02 np0005540697 systemd[1]: Finished Crash recovery kernel arming.
Dec  1 03:13:02 np0005540697 systemd[1]: Startup finished in 1.731s (kernel) + 2.899s (initrd) + 17.705s (userspace) = 22.336s.
Dec  1 03:13:04 np0005540697 systemd[1]: Created slice User Slice of UID 1000.
Dec  1 03:13:04 np0005540697 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  1 03:13:04 np0005540697 systemd-logind[792]: New session 1 of user zuul.
Dec  1 03:13:04 np0005540697 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  1 03:13:04 np0005540697 systemd[1]: Starting User Manager for UID 1000...
Dec  1 03:13:05 np0005540697 systemd[4301]: Queued start job for default target Main User Target.
Dec  1 03:13:05 np0005540697 systemd[4301]: Created slice User Application Slice.
Dec  1 03:13:05 np0005540697 systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  1 03:13:05 np0005540697 systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 03:13:05 np0005540697 systemd[4301]: Reached target Paths.
Dec  1 03:13:05 np0005540697 systemd[4301]: Reached target Timers.
Dec  1 03:13:05 np0005540697 systemd[4301]: Starting D-Bus User Message Bus Socket...
Dec  1 03:13:05 np0005540697 systemd[4301]: Starting Create User's Volatile Files and Directories...
Dec  1 03:13:05 np0005540697 systemd[4301]: Listening on D-Bus User Message Bus Socket.
Dec  1 03:13:05 np0005540697 systemd[4301]: Reached target Sockets.
Dec  1 03:13:05 np0005540697 systemd[4301]: Finished Create User's Volatile Files and Directories.
Dec  1 03:13:05 np0005540697 systemd[4301]: Reached target Basic System.
Dec  1 03:13:05 np0005540697 systemd[4301]: Reached target Main User Target.
Dec  1 03:13:05 np0005540697 systemd[4301]: Startup finished in 131ms.
Dec  1 03:13:05 np0005540697 systemd[1]: Started User Manager for UID 1000.
Dec  1 03:13:05 np0005540697 systemd[1]: Started Session 1 of User zuul.
Dec  1 03:13:05 np0005540697 python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:13:08 np0005540697 python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:13:14 np0005540697 python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:13:15 np0005540697 python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  1 03:13:17 np0005540697 python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2Su/OUquVYJHxwrfz53i8i81ePVx1Mvcy+v3DX88fHUvdTPz+vyfOfp18dgui3D4rKvIHBsy/ssLMsX1OwsVKYUjZ9a0oOV2n/TtQrMoFXg9vwliJ0P0+ld4Rg/b47G6JCQcnlM8O8Sw35PY1Txqs/KqHq0zWIsFQx5kr0W8Tlpo1cXeN/ajdzB7/m8xScdzdBQrbu580MC3L0d6HLxbZZd9TBM/Nwn29lN0doNHi9I0PFjtCc3r/LA/EGZuDv3Av9dXmcRD+cdxPb/Pwxd/u9Z58VGquJpoK0nK+VCs6jeLil1Zxp2BZzR/IRK0Zfzm19vkghZ1IIRIxuXimccOKGTzMnCxWjY90rSHeqF9jF4te6GL8abRWMt6j0RU+iiyQQfy6i6V8UtXCiT8MAXy7gz1WnlrPDbl1E8GOqgKTK6JAyJEKlnrYYaYYDUQu9Bw/rxbN6CQR5pyNglxUUHTTZZDT3o4uacz/Qf+AAJMJAyYmhY5vivsczo6YtG4Ub6M= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:17 np0005540697 python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:17 np0005540697 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 03:13:18 np0005540697 python3[4662]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:13:18 np0005540697 python3[4733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764576797.7024016-207-227120625487651/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=76d84aaba0f04d77b0d5555a62a2bfbb_id_rsa follow=False checksum=b6c8584267bb5a86e1d4b2d767fa2ffc6a6ede38 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:19 np0005540697 python3[4856]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:13:19 np0005540697 python3[4927]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764576798.680289-240-21684791114251/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=76d84aaba0f04d77b0d5555a62a2bfbb_id_rsa.pub follow=False checksum=712ef5670a7853837c5fb537b5b2e4e33efa33a2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:20 np0005540697 python3[4975]: ansible-ping Invoked with data=pong
Dec  1 03:13:21 np0005540697 python3[4999]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:13:24 np0005540697 python3[5057]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  1 03:13:25 np0005540697 python3[5089]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:25 np0005540697 python3[5113]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:25 np0005540697 python3[5137]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:26 np0005540697 python3[5161]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:26 np0005540697 python3[5185]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:26 np0005540697 python3[5209]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:28 np0005540697 python3[5235]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:28 np0005540697 python3[5313]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:13:29 np0005540697 python3[5386]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764576808.5084236-21-134272226716512/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:30 np0005540697 python3[5434]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:30 np0005540697 python3[5458]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:30 np0005540697 python3[5482]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:30 np0005540697 python3[5506]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:31 np0005540697 python3[5530]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:31 np0005540697 python3[5554]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:31 np0005540697 python3[5578]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:32 np0005540697 python3[5602]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:32 np0005540697 python3[5626]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:32 np0005540697 python3[5650]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:33 np0005540697 python3[5674]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:33 np0005540697 python3[5698]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:33 np0005540697 python3[5722]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:33 np0005540697 python3[5746]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:34 np0005540697 python3[5770]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:34 np0005540697 python3[5794]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:34 np0005540697 python3[5818]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:35 np0005540697 python3[5842]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:35 np0005540697 python3[5866]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:35 np0005540697 python3[5890]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:36 np0005540697 python3[5914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:36 np0005540697 python3[5938]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:36 np0005540697 python3[5962]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:36 np0005540697 python3[5986]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:37 np0005540697 python3[6010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:37 np0005540697 python3[6034]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:13:39 np0005540697 python3[6060]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 03:13:39 np0005540697 systemd[1]: Starting Time & Date Service...
Dec  1 03:13:39 np0005540697 systemd[1]: Started Time & Date Service.
Dec  1 03:13:39 np0005540697 systemd-timedated[6062]: Changed time zone to 'UTC' (UTC).
Dec  1 03:13:40 np0005540697 python3[6091]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:40 np0005540697 python3[6167]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:13:41 np0005540697 python3[6238]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764576820.4987547-153-209102110625277/source _original_basename=tmpsc7x6c6z follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:41 np0005540697 python3[6338]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:13:42 np0005540697 python3[6409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764576821.3911932-183-234143815436417/source _original_basename=tmp4tgk9onf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:42 np0005540697 python3[6511]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:13:43 np0005540697 python3[6584]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764576822.4585092-231-128552753435984/source _original_basename=tmpbkaugdsj follow=False checksum=d994f5a0f8305d9967bdf6cc68f2476e459dce01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:43 np0005540697 python3[6632]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:13:44 np0005540697 python3[6658]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:13:44 np0005540697 python3[6738]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:13:44 np0005540697 python3[6811]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764576824.2152731-273-170970627329336/source _original_basename=tmpk2xmmgo1 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:13:45 np0005540697 python3[6862]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-9a4c-73a2-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:13:46 np0005540697 python3[6890]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-9a4c-73a2-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  1 03:13:47 np0005540697 python3[6918]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:14:03 np0005540697 python3[6944]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:14:09 np0005540697 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  1 03:14:36 np0005540697 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  1 03:14:36 np0005540697 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8033] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 03:14:36 np0005540697 systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8201] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8225] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8228] device (eth1): carrier: link connected
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8229] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8234] policy: auto-activating connection 'Wired connection 1' (c03566d5-28ee-35b2-b3f7-ec229e142493)
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8237] device (eth1): Activation: starting connection 'Wired connection 1' (c03566d5-28ee-35b2-b3f7-ec229e142493)
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8237] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8240] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8243] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:14:36 np0005540697 NetworkManager[859]: <info>  [1764576876.8246] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:14:37 np0005540697 python3[6974]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-4322-d8ea-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:14:44 np0005540697 python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:14:45 np0005540697 python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764576884.4453833-102-29132801840939/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=74b143999ba8d8c7d127f00278b52fb0d27966c7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:14:46 np0005540697 python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 03:14:46 np0005540697 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  1 03:14:46 np0005540697 systemd[1]: Stopped Network Manager Wait Online.
Dec  1 03:14:46 np0005540697 systemd[1]: Stopping Network Manager Wait Online...
Dec  1 03:14:46 np0005540697 NetworkManager[859]: <info>  [1764576886.1574] caught SIGTERM, shutting down normally.
Dec  1 03:14:46 np0005540697 systemd[1]: Stopping Network Manager...
Dec  1 03:14:46 np0005540697 NetworkManager[859]: <info>  [1764576886.1583] dhcp4 (eth0): canceled DHCP transaction
Dec  1 03:14:46 np0005540697 NetworkManager[859]: <info>  [1764576886.1583] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:14:46 np0005540697 NetworkManager[859]: <info>  [1764576886.1583] dhcp4 (eth0): state changed no lease
Dec  1 03:14:46 np0005540697 NetworkManager[859]: <info>  [1764576886.1586] manager: NetworkManager state is now CONNECTING
Dec  1 03:14:46 np0005540697 NetworkManager[859]: <info>  [1764576886.1680] dhcp4 (eth1): canceled DHCP transaction
Dec  1 03:14:46 np0005540697 NetworkManager[859]: <info>  [1764576886.1680] dhcp4 (eth1): state changed no lease
Dec  1 03:14:46 np0005540697 NetworkManager[859]: <info>  [1764576886.1744] exiting (success)
Dec  1 03:14:46 np0005540697 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 03:14:46 np0005540697 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  1 03:14:46 np0005540697 systemd[1]: Stopped Network Manager.
Dec  1 03:14:46 np0005540697 systemd[1]: Starting Network Manager...
Dec  1 03:14:46 np0005540697 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.2411] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:92b7c342-66b4-4a80-acaf-17e049c1eafe)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.2414] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.2510] manager[0x5604f6e30070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 03:14:46 np0005540697 systemd[1]: Starting Hostname Service...
Dec  1 03:14:46 np0005540697 systemd[1]: Started Hostname Service.
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3473] hostname: hostname: using hostnamed
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3476] hostname: static hostname changed from (none) to "np0005540697.novalocal"
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3484] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3491] manager[0x5604f6e30070]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3491] manager[0x5604f6e30070]: rfkill: WWAN hardware radio set enabled
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3536] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3536] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3537] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3538] manager: Networking is enabled by state file
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3542] settings: Loaded settings plugin: keyfile (internal)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3549] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3589] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3603] dhcp: init: Using DHCP client 'internal'
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3608] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3616] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3627] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3640] device (lo): Activation: starting connection 'lo' (dfe05a7d-1dbe-4572-b7fc-2c1528ca0986)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3651] device (eth0): carrier: link connected
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3663] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3671] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3671] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3684] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3697] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3707] device (eth1): carrier: link connected
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3715] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3723] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (c03566d5-28ee-35b2-b3f7-ec229e142493) (indicated)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3724] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3733] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3744] device (eth1): Activation: starting connection 'Wired connection 1' (c03566d5-28ee-35b2-b3f7-ec229e142493)
Dec  1 03:14:46 np0005540697 systemd[1]: Started Network Manager.
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3754] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3761] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3764] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3767] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3770] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3775] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3780] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3783] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3788] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3799] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3804] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3818] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3821] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3845] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3852] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3860] device (lo): Activation: successful, device activated.
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3876] dhcp4 (eth0): state changed new lease, address=38.102.83.13
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3891] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 03:14:46 np0005540697 systemd[1]: Starting Network Manager Wait Online...
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3969] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.3998] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.4002] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.4009] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.4016] device (eth0): Activation: successful, device activated.
Dec  1 03:14:46 np0005540697 NetworkManager[7183]: <info>  [1764576886.4025] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 03:14:46 np0005540697 python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-4322-d8ea-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:14:56 np0005540697 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 03:15:16 np0005540697 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.2504] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 03:15:31 np0005540697 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 03:15:31 np0005540697 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.2894] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.2899] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.2922] device (eth1): Activation: successful, device activated.
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.2936] manager: startup complete
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.2957] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <warn>  [1764576931.2967] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.2980] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  1 03:15:31 np0005540697 systemd[1]: Finished Network Manager Wait Online.
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3070] dhcp4 (eth1): canceled DHCP transaction
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3071] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3071] dhcp4 (eth1): state changed no lease
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3095] policy: auto-activating connection 'ci-private-network' (703de4c6-3c25-5c70-99c2-a604be2498bd)
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3103] device (eth1): Activation: starting connection 'ci-private-network' (703de4c6-3c25-5c70-99c2-a604be2498bd)
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3104] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3109] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3120] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3134] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3190] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3192] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:15:31 np0005540697 NetworkManager[7183]: <info>  [1764576931.3203] device (eth1): Activation: successful, device activated.
Dec  1 03:15:41 np0005540697 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 03:15:41 np0005540697 python3[7370]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:15:42 np0005540697 python3[7443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764576941.5739748-259-73960801819127/source _original_basename=tmpd80z2cps follow=False checksum=8992a421273e58fbbc18ad0b2f61a2973234edbc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:15:48 np0005540697 systemd[4301]: Starting Mark boot as successful...
Dec  1 03:15:48 np0005540697 systemd[4301]: Finished Mark boot as successful.
Dec  1 03:16:42 np0005540697 systemd-logind[792]: Session 1 logged out. Waiting for processes to exit.
Dec  1 03:18:48 np0005540697 systemd[4301]: Created slice User Background Tasks Slice.
Dec  1 03:18:48 np0005540697 systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Dec  1 03:18:48 np0005540697 systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Dec  1 03:23:31 np0005540697 systemd-logind[792]: New session 3 of user zuul.
Dec  1 03:23:31 np0005540697 systemd[1]: Started Session 3 of User zuul.
Dec  1 03:23:31 np0005540697 python3[7520]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-50c8-ae81-000000001cf4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:23:32 np0005540697 python3[7548]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:23:32 np0005540697 python3[7575]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:23:32 np0005540697 python3[7601]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:23:32 np0005540697 python3[7627]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:23:33 np0005540697 python3[7653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:23:33 np0005540697 python3[7731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:23:34 np0005540697 python3[7804]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764577413.5860684-502-121252501812962/source _original_basename=tmp2i_84tr9 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:23:35 np0005540697 python3[7854]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 03:23:35 np0005540697 systemd[1]: Reloading.
Dec  1 03:23:35 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:23:36 np0005540697 python3[7909]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  1 03:23:37 np0005540697 python3[7935]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:23:37 np0005540697 python3[7963]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:23:37 np0005540697 python3[7991]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:23:38 np0005540697 python3[8019]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:23:38 np0005540697 python3[8046]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-50c8-ae81-000000001cfb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:23:39 np0005540697 python3[8076]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 03:23:41 np0005540697 systemd[1]: session-3.scope: Deactivated successfully.
Dec  1 03:23:41 np0005540697 systemd[1]: session-3.scope: Consumed 4.437s CPU time.
Dec  1 03:23:41 np0005540697 systemd-logind[792]: Session 3 logged out. Waiting for processes to exit.
Dec  1 03:23:41 np0005540697 systemd-logind[792]: Removed session 3.
Dec  1 03:23:42 np0005540697 systemd-logind[792]: New session 4 of user zuul.
Dec  1 03:23:42 np0005540697 systemd[1]: Started Session 4 of User zuul.
Dec  1 03:23:43 np0005540697 python3[8110]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  1 03:23:58 np0005540697 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 03:23:58 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 03:23:58 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 03:23:58 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 03:23:58 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 03:23:58 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 03:23:58 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 03:23:58 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 03:24:08 np0005540697 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 03:24:08 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 03:24:08 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 03:24:08 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 03:24:08 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 03:24:08 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 03:24:08 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 03:24:08 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 03:24:19 np0005540697 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 03:24:19 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 03:24:19 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 03:24:19 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 03:24:19 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 03:24:19 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 03:24:19 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 03:24:19 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 03:24:20 np0005540697 setsebool[8178]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  1 03:24:20 np0005540697 setsebool[8178]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  1 03:24:33 np0005540697 kernel: SELinux:  Converting 388 SID table entries...
Dec  1 03:24:33 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 03:24:33 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 03:24:33 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 03:24:33 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 03:24:33 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 03:24:33 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 03:24:33 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 03:24:51 np0005540697 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  1 03:24:51 np0005540697 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 03:24:51 np0005540697 systemd[1]: Starting man-db-cache-update.service...
Dec  1 03:24:51 np0005540697 systemd[1]: Reloading.
Dec  1 03:24:51 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:24:51 np0005540697 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 03:24:54 np0005540697 python3[10115]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-daaf-991e-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:24:56 np0005540697 kernel: evm: overlay not supported
Dec  1 03:24:57 np0005540697 systemd[4301]: Starting D-Bus User Message Bus...
Dec  1 03:24:57 np0005540697 dbus-broker-launch[12023]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  1 03:24:57 np0005540697 dbus-broker-launch[12023]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  1 03:24:57 np0005540697 systemd[4301]: Started D-Bus User Message Bus.
Dec  1 03:24:57 np0005540697 dbus-broker-lau[12023]: Ready
Dec  1 03:24:57 np0005540697 systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  1 03:24:57 np0005540697 systemd[4301]: Created slice Slice /user.
Dec  1 03:24:57 np0005540697 systemd[4301]: podman-11294.scope: unit configures an IP firewall, but not running as root.
Dec  1 03:24:57 np0005540697 systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Dec  1 03:24:57 np0005540697 systemd[4301]: Started podman-11294.scope.
Dec  1 03:24:57 np0005540697 systemd[4301]: Started podman-pause-e3aecd24.scope.
Dec  1 03:24:57 np0005540697 python3[12593]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.30:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.30:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:24:57 np0005540697 python3[12593]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  1 03:24:58 np0005540697 systemd[1]: session-4.scope: Deactivated successfully.
Dec  1 03:24:58 np0005540697 systemd[1]: session-4.scope: Consumed 1min 5.496s CPU time.
Dec  1 03:24:58 np0005540697 systemd-logind[792]: Session 4 logged out. Waiting for processes to exit.
Dec  1 03:24:58 np0005540697 systemd-logind[792]: Removed session 4.
Dec  1 03:25:22 np0005540697 systemd-logind[792]: New session 5 of user zuul.
Dec  1 03:25:22 np0005540697 systemd[1]: Started Session 5 of User zuul.
Dec  1 03:25:23 np0005540697 python3[21290]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIr1Vx9qUeraoUJ4GEUeY9QVYp8dwtElz6r6XTTlHPkaJRY9UYBtGBmXqyUjMGB42mwxc8xVzpHSytoNMruE53o= zuul@np0005540696.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:25:23 np0005540697 python3[21408]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIr1Vx9qUeraoUJ4GEUeY9QVYp8dwtElz6r6XTTlHPkaJRY9UYBtGBmXqyUjMGB42mwxc8xVzpHSytoNMruE53o= zuul@np0005540696.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:25:24 np0005540697 python3[21701]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005540697.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  1 03:25:25 np0005540697 python3[21977]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIr1Vx9qUeraoUJ4GEUeY9QVYp8dwtElz6r6XTTlHPkaJRY9UYBtGBmXqyUjMGB42mwxc8xVzpHSytoNMruE53o= zuul@np0005540696.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 03:25:25 np0005540697 python3[22217]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:25:26 np0005540697 python3[22453]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764577525.2745004-135-50213520111123/source _original_basename=tmpzyo2xfg6 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:25:26 np0005540697 python3[22703]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  1 03:25:26 np0005540697 systemd[1]: Starting Hostname Service...
Dec  1 03:25:26 np0005540697 systemd[1]: Started Hostname Service.
Dec  1 03:25:26 np0005540697 systemd-hostnamed[22784]: Changed pretty hostname to 'compute-0'
Dec  1 03:25:26 np0005540697 systemd-hostnamed[22784]: Hostname set to <compute-0> (static)
Dec  1 03:25:26 np0005540697 NetworkManager[7183]: <info>  [1764577526.9839] hostname: static hostname changed from "np0005540697.novalocal" to "compute-0"
Dec  1 03:25:27 np0005540697 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 03:25:27 np0005540697 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 03:25:27 np0005540697 systemd[1]: session-5.scope: Deactivated successfully.
Dec  1 03:25:27 np0005540697 systemd[1]: session-5.scope: Consumed 2.430s CPU time.
Dec  1 03:25:27 np0005540697 systemd-logind[792]: Session 5 logged out. Waiting for processes to exit.
Dec  1 03:25:27 np0005540697 systemd-logind[792]: Removed session 5.
Dec  1 03:25:37 np0005540697 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 03:25:51 np0005540697 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 03:25:51 np0005540697 systemd[1]: Finished man-db-cache-update.service.
Dec  1 03:25:51 np0005540697 systemd[1]: man-db-cache-update.service: Consumed 1min 9.817s CPU time.
Dec  1 03:25:51 np0005540697 systemd[1]: run-rb42df22a0dad491e9233bdabb98f44d7.service: Deactivated successfully.
Dec  1 03:25:57 np0005540697 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 03:27:48 np0005540697 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  1 03:27:48 np0005540697 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  1 03:27:48 np0005540697 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  1 03:27:48 np0005540697 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  1 03:30:15 np0005540697 systemd-logind[792]: New session 6 of user zuul.
Dec  1 03:30:15 np0005540697 systemd[1]: Started Session 6 of User zuul.
Dec  1 03:30:16 np0005540697 python3[30068]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:30:17 np0005540697 python3[30184]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:30:18 np0005540697 python3[30257]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764577817.3019812-33685-206827130768743/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:30:18 np0005540697 python3[30283]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:30:19 np0005540697 python3[30356]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764577817.3019812-33685-206827130768743/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:30:19 np0005540697 python3[30382]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:30:19 np0005540697 python3[30455]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764577817.3019812-33685-206827130768743/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:30:20 np0005540697 python3[30481]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:30:20 np0005540697 python3[30554]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764577817.3019812-33685-206827130768743/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:30:20 np0005540697 python3[30580]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:30:21 np0005540697 python3[30653]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764577817.3019812-33685-206827130768743/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:30:21 np0005540697 python3[30679]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:30:21 np0005540697 python3[30752]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764577817.3019812-33685-206827130768743/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:30:22 np0005540697 python3[30778]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 03:30:22 np0005540697 python3[30851]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764577817.3019812-33685-206827130768743/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:33:05 np0005540697 python3[30911]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:38:05 np0005540697 systemd[1]: session-6.scope: Deactivated successfully.
Dec  1 03:38:05 np0005540697 systemd[1]: session-6.scope: Consumed 5.833s CPU time.
Dec  1 03:38:05 np0005540697 systemd-logind[792]: Session 6 logged out. Waiting for processes to exit.
Dec  1 03:38:05 np0005540697 systemd-logind[792]: Removed session 6.
Dec  1 03:46:12 np0005540697 systemd-logind[792]: New session 7 of user zuul.
Dec  1 03:46:12 np0005540697 systemd[1]: Started Session 7 of User zuul.
Dec  1 03:46:14 np0005540697 python3.9[31072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:46:15 np0005540697 python3.9[31253]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:46:24 np0005540697 systemd[1]: session-7.scope: Deactivated successfully.
Dec  1 03:46:24 np0005540697 systemd[1]: session-7.scope: Consumed 8.505s CPU time.
Dec  1 03:46:24 np0005540697 systemd-logind[792]: Session 7 logged out. Waiting for processes to exit.
Dec  1 03:46:24 np0005540697 systemd-logind[792]: Removed session 7.
Dec  1 03:46:30 np0005540697 systemd-logind[792]: New session 8 of user zuul.
Dec  1 03:46:30 np0005540697 systemd[1]: Started Session 8 of User zuul.
Dec  1 03:46:31 np0005540697 python3.9[31463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:46:32 np0005540697 systemd[1]: session-8.scope: Deactivated successfully.
Dec  1 03:46:32 np0005540697 systemd-logind[792]: Session 8 logged out. Waiting for processes to exit.
Dec  1 03:46:32 np0005540697 systemd-logind[792]: Removed session 8.
Dec  1 03:46:48 np0005540697 systemd-logind[792]: New session 9 of user zuul.
Dec  1 03:46:48 np0005540697 systemd[1]: Started Session 9 of User zuul.
Dec  1 03:46:49 np0005540697 python3.9[31643]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  1 03:46:50 np0005540697 python3.9[31817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:46:51 np0005540697 python3.9[31969]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:46:52 np0005540697 python3.9[32122]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:46:53 np0005540697 python3.9[32274]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:46:53 np0005540697 python3.9[32426]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:46:54 np0005540697 python3.9[32549]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764578813.391132-73-57794815011662/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:46:55 np0005540697 python3.9[32701]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:46:56 np0005540697 python3.9[32857]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:46:56 np0005540697 python3.9[33009]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:46:57 np0005540697 python3.9[33159]: ansible-ansible.builtin.service_facts Invoked
Dec  1 03:47:01 np0005540697 python3.9[33412]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:47:02 np0005540697 python3.9[33562]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:47:03 np0005540697 python3.9[33716]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:47:04 np0005540697 python3.9[33874]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:47:05 np0005540697 python3.9[33958]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:47:47 np0005540697 systemd[1]: Reloading.
Dec  1 03:47:47 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:47:47 np0005540697 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  1 03:47:47 np0005540697 systemd[1]: Reloading.
Dec  1 03:47:47 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:47:47 np0005540697 systemd[1]: Starting dnf makecache...
Dec  1 03:47:47 np0005540697 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  1 03:47:48 np0005540697 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  1 03:47:48 np0005540697 systemd[1]: Reloading.
Dec  1 03:47:48 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:47:48 np0005540697 dnf[34204]: Failed determining last makecache time.
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-barbican-42b4c41831408a8e323 142 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 191 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-cinder-1c00d6490d88e436f26ef 181 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-python-stevedore-c4acc5639fd2329372142 201 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-python-cloudkitty-tests-tempest-2c80f8 192 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 196 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 193 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-python-designate-tests-tempest-347fdbc 188 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-glance-1fd12c29b339f30fe823e 188 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 198 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-manila-3c01b7181572c95dac462 190 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-python-whitebox-neutron-tests-tempest- 189 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-octavia-ba397f07a7331190208c 180 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-watcher-c014f81a8647287f6dcc 174 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-ansible-config_template-5ccaa22121a7ff 183 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  1 03:47:48 np0005540697 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 153 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-swift-dc98a8463506ac520c469a 187 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-python-tempestconf-8515371b7cceebd4282 189 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: delorean-openstack-heat-ui-013accbfd179753bc3f0 149 kB/s | 3.0 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: CentOS Stream 9 - BaseOS                         81 kB/s | 7.3 kB     00:00
Dec  1 03:47:48 np0005540697 dnf[34204]: CentOS Stream 9 - AppStream                      46 kB/s | 7.4 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: CentOS Stream 9 - CRB                            76 kB/s | 7.2 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: CentOS Stream 9 - Extras packages                72 kB/s | 8.3 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: dlrn-antelope-testing                           164 kB/s | 3.0 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: dlrn-antelope-build-deps                        185 kB/s | 3.0 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: centos9-rabbitmq                                133 kB/s | 3.0 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: centos9-storage                                 129 kB/s | 3.0 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: centos9-opstools                                 21 kB/s | 3.0 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: NFV SIG OpenvSwitch                             136 kB/s | 3.0 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: repo-setup-centos-appstream                     209 kB/s | 4.4 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: repo-setup-centos-baseos                        162 kB/s | 3.9 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: repo-setup-centos-highavailability              169 kB/s | 3.9 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: repo-setup-centos-powertools                    190 kB/s | 4.3 kB     00:00
Dec  1 03:47:49 np0005540697 dnf[34204]: Extra Packages for Enterprise Linux 9 - x86_64  260 kB/s |  30 kB     00:00
Dec  1 03:47:50 np0005540697 dnf[34204]: Metadata cache created.
Dec  1 03:47:50 np0005540697 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  1 03:47:50 np0005540697 systemd[1]: Finished dnf makecache.
Dec  1 03:47:50 np0005540697 systemd[1]: dnf-makecache.service: Consumed 1.781s CPU time.
Dec  1 03:48:51 np0005540697 kernel: SELinux:  Converting 2718 SID table entries...
Dec  1 03:48:51 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 03:48:51 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 03:48:51 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 03:48:51 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 03:48:51 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 03:48:51 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 03:48:51 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 03:48:51 np0005540697 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  1 03:48:51 np0005540697 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 03:48:51 np0005540697 systemd[1]: Starting man-db-cache-update.service...
Dec  1 03:48:51 np0005540697 systemd[1]: Reloading.
Dec  1 03:48:51 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:48:51 np0005540697 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 03:48:52 np0005540697 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 03:48:52 np0005540697 systemd[1]: Finished man-db-cache-update.service.
Dec  1 03:48:52 np0005540697 systemd[1]: man-db-cache-update.service: Consumed 1.183s CPU time.
Dec  1 03:48:52 np0005540697 systemd[1]: run-r6c319a0ab3c84d3ca095851a1cec2be7.service: Deactivated successfully.
Dec  1 03:48:52 np0005540697 python3.9[35511]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:48:55 np0005540697 python3.9[35792]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  1 03:48:56 np0005540697 python3.9[35944]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  1 03:48:58 np0005540697 python3.9[36097]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:48:59 np0005540697 python3.9[36249]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  1 03:49:01 np0005540697 python3.9[36401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:49:01 np0005540697 python3.9[36553]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:49:02 np0005540697 python3.9[36676]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764578941.2187214-236-146514220436480/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:49:05 np0005540697 python3.9[36828]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:49:07 np0005540697 python3.9[36980]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:49:08 np0005540697 python3.9[37133]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:49:09 np0005540697 python3.9[37285]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  1 03:49:09 np0005540697 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 03:49:09 np0005540697 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 03:49:10 np0005540697 python3.9[37439]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 03:49:11 np0005540697 python3.9[37597]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 03:49:12 np0005540697 python3.9[37757]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  1 03:49:13 np0005540697 python3.9[37910]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 03:49:14 np0005540697 python3.9[38068]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  1 03:49:15 np0005540697 python3.9[38220]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:49:18 np0005540697 python3.9[38373]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:49:18 np0005540697 python3.9[38525]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:49:19 np0005540697 python3.9[38648]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764578958.352452-355-119059126249539/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:49:20 np0005540697 python3.9[38800]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 03:49:20 np0005540697 systemd[1]: Starting Load Kernel Modules...
Dec  1 03:49:20 np0005540697 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  1 03:49:20 np0005540697 kernel: Bridge firewalling registered
Dec  1 03:49:20 np0005540697 systemd-modules-load[38804]: Inserted module 'br_netfilter'
Dec  1 03:49:20 np0005540697 systemd[1]: Finished Load Kernel Modules.
Dec  1 03:49:21 np0005540697 python3.9[38959]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:49:22 np0005540697 python3.9[39082]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764578961.159734-378-272765263915701/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:49:23 np0005540697 python3.9[39234]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:49:26 np0005540697 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  1 03:49:27 np0005540697 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  1 03:49:27 np0005540697 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 03:49:27 np0005540697 systemd[1]: Starting man-db-cache-update.service...
Dec  1 03:49:27 np0005540697 systemd[1]: Reloading.
Dec  1 03:49:27 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:49:27 np0005540697 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 03:49:29 np0005540697 python3.9[40295]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:49:30 np0005540697 python3.9[41254]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  1 03:49:30 np0005540697 python3.9[41915]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:49:31 np0005540697 python3.9[42758]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:49:31 np0005540697 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 03:49:31 np0005540697 systemd[1]: Starting Authorization Manager...
Dec  1 03:49:31 np0005540697 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  1 03:49:31 np0005540697 polkitd[43534]: Started polkitd version 0.117
Dec  1 03:49:32 np0005540697 systemd[1]: Started Authorization Manager.
Dec  1 03:49:32 np0005540697 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 03:49:32 np0005540697 systemd[1]: Finished man-db-cache-update.service.
Dec  1 03:49:32 np0005540697 systemd[1]: man-db-cache-update.service: Consumed 6.062s CPU time.
Dec  1 03:49:32 np0005540697 systemd[1]: run-r816cb79b7dfb4090814d49cc64266283.service: Deactivated successfully.
Dec  1 03:49:33 np0005540697 python3.9[43818]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:49:33 np0005540697 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  1 03:49:33 np0005540697 systemd[1]: tuned.service: Deactivated successfully.
Dec  1 03:49:33 np0005540697 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  1 03:49:33 np0005540697 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 03:49:33 np0005540697 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  1 03:49:34 np0005540697 python3.9[43980]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  1 03:49:36 np0005540697 python3.9[44132]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:49:37 np0005540697 systemd[1]: Reloading.
Dec  1 03:49:37 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:49:38 np0005540697 python3.9[44322]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:49:38 np0005540697 systemd[1]: Reloading.
Dec  1 03:49:38 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:49:40 np0005540697 python3.9[44512]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:49:40 np0005540697 python3.9[44665]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:49:40 np0005540697 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  1 03:49:41 np0005540697 python3.9[44818]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:49:43 np0005540697 python3.9[44980]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:49:44 np0005540697 python3.9[45133]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 03:49:44 np0005540697 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  1 03:49:44 np0005540697 systemd[1]: Stopped Apply Kernel Variables.
Dec  1 03:49:44 np0005540697 systemd[1]: Stopping Apply Kernel Variables...
Dec  1 03:49:44 np0005540697 systemd[1]: Starting Apply Kernel Variables...
Dec  1 03:49:44 np0005540697 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  1 03:49:44 np0005540697 systemd[1]: Finished Apply Kernel Variables.
Dec  1 03:49:45 np0005540697 systemd[1]: session-9.scope: Deactivated successfully.
Dec  1 03:49:45 np0005540697 systemd[1]: session-9.scope: Consumed 2min 13.604s CPU time.
Dec  1 03:49:45 np0005540697 systemd-logind[792]: Session 9 logged out. Waiting for processes to exit.
Dec  1 03:49:45 np0005540697 systemd-logind[792]: Removed session 9.
Dec  1 03:49:51 np0005540697 systemd-logind[792]: New session 10 of user zuul.
Dec  1 03:49:51 np0005540697 systemd[1]: Started Session 10 of User zuul.
Dec  1 03:49:52 np0005540697 python3.9[45316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:49:53 np0005540697 python3.9[45470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:49:54 np0005540697 python3.9[45626]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:49:55 np0005540697 python3.9[45777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:49:56 np0005540697 python3.9[45933]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:49:57 np0005540697 python3.9[46017]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:49:59 np0005540697 python3.9[46170]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:50:01 np0005540697 python3.9[46341]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:50:01 np0005540697 python3.9[46493]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:50:01 np0005540697 systemd[1]: var-lib-containers-storage-overlay-compat3780145186-merged.mount: Deactivated successfully.
Dec  1 03:50:02 np0005540697 podman[46494]: 2025-12-01 08:50:02.008785528 +0000 UTC m=+0.061101029 system refresh
Dec  1 03:50:02 np0005540697 python3.9[46656]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:50:02 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:50:03 np0005540697 python3.9[46779]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579002.2241821-109-250482779244984/.source.json follow=False _original_basename=podman_network_config.j2 checksum=3cb4228292e43779c3a6fcc5d16305eebe13163b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:50:04 np0005540697 python3.9[46931]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:50:04 np0005540697 python3.9[47054]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579003.84051-124-182388441913725/.source.conf follow=False _original_basename=registries.conf.j2 checksum=b723c254c5347521a0bd9978182359a7d08823fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:50:05 np0005540697 python3.9[47206]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:50:06 np0005540697 python3.9[47358]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:50:07 np0005540697 python3.9[47510]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:50:07 np0005540697 python3.9[47662]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:50:08 np0005540697 python3.9[47812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:50:09 np0005540697 python3.9[47966]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:11 np0005540697 python3.9[48119]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:15 np0005540697 python3.9[48279]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:17 np0005540697 python3.9[48432]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:20 np0005540697 python3.9[48585]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:22 np0005540697 python3.9[48741]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:27 np0005540697 python3.9[48911]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:30 np0005540697 python3.9[49064]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:45 np0005540697 python3.9[49402]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:50:47 np0005540697 python3.9[49558]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:50:48 np0005540697 python3.9[49733]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:50:49 np0005540697 python3.9[49856]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764579048.195509-272-246136715157408/.source.json _original_basename=.h37aqbcv follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:50:50 np0005540697 python3.9[50008]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 03:50:50 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:50:53 np0005540697 systemd[1]: var-lib-containers-storage-overlay-compat3937067776-lower\x2dmapped.mount: Deactivated successfully.
Dec  1 03:50:56 np0005540697 podman[50020]: 2025-12-01 08:50:56.83354942 +0000 UTC m=+6.275669590 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 03:50:56 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:50:56 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:50:56 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:50:57 np0005540697 python3.9[50322]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 03:50:57 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:09 np0005540697 podman[50334]: 2025-12-01 08:51:09.168513147 +0000 UTC m=+11.195131927 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 03:51:09 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:09 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:09 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:10 np0005540697 python3.9[50637]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 03:51:10 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:12 np0005540697 podman[50649]: 2025-12-01 08:51:12.018618393 +0000 UTC m=+1.553571418 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 03:51:12 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:12 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:12 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:13 np0005540697 python3.9[50887]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 03:51:13 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:27 np0005540697 podman[50900]: 2025-12-01 08:51:27.066599335 +0000 UTC m=+13.914308436 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 03:51:27 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:27 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:27 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:28 np0005540697 python3.9[51172]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 03:51:28 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:45 np0005540697 podman[51185]: 2025-12-01 08:51:45.13290483 +0000 UTC m=+16.781721705 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  1 03:51:45 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:45 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:45 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:46 np0005540697 python3.9[51502]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 03:51:46 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:49 np0005540697 podman[51514]: 2025-12-01 08:51:49.643548614 +0000 UTC m=+3.411174484 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  1 03:51:49 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:49 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:49 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:50 np0005540697 python3.9[51788]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 03:51:50 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:54 np0005540697 podman[51801]: 2025-12-01 08:51:54.435252617 +0000 UTC m=+3.560923410 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  1 03:51:54 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:54 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:54 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:51:55 np0005540697 python3.9[52059]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 03:52:05 np0005540697 podman[52071]: 2025-12-01 08:52:05.379357487 +0000 UTC m=+9.937172643 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  1 03:52:05 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:52:05 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:52:05 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:52:06 np0005540697 systemd[1]: session-10.scope: Deactivated successfully.
Dec  1 03:52:06 np0005540697 systemd[1]: session-10.scope: Consumed 2min 40.738s CPU time.
Dec  1 03:52:06 np0005540697 systemd-logind[792]: Session 10 logged out. Waiting for processes to exit.
Dec  1 03:52:06 np0005540697 systemd-logind[792]: Removed session 10.
Dec  1 03:52:11 np0005540697 systemd-logind[792]: New session 11 of user zuul.
Dec  1 03:52:11 np0005540697 systemd[1]: Started Session 11 of User zuul.
Dec  1 03:52:13 np0005540697 python3.9[52472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:52:14 np0005540697 python3.9[52628]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  1 03:52:15 np0005540697 python3.9[52781]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 03:52:16 np0005540697 python3.9[52939]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 03:52:19 np0005540697 python3.9[53099]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:52:20 np0005540697 python3.9[53183]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:52:23 np0005540697 python3.9[53345]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:52:35 np0005540697 kernel: SELinux:  Converting 2731 SID table entries...
Dec  1 03:52:35 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 03:52:35 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 03:52:35 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 03:52:35 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 03:52:35 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 03:52:35 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 03:52:35 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 03:52:35 np0005540697 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  1 03:52:35 np0005540697 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  1 03:52:37 np0005540697 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 03:52:37 np0005540697 systemd[1]: Starting man-db-cache-update.service...
Dec  1 03:52:37 np0005540697 systemd[1]: Reloading.
Dec  1 03:52:37 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:52:37 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:52:37 np0005540697 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 03:52:38 np0005540697 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 03:52:38 np0005540697 systemd[1]: Finished man-db-cache-update.service.
Dec  1 03:52:38 np0005540697 systemd[1]: man-db-cache-update.service: Consumed 1.229s CPU time.
Dec  1 03:52:38 np0005540697 systemd[1]: run-rdbe6bf0f5ff343f9ac23b9d08b389284.service: Deactivated successfully.
Dec  1 03:52:39 np0005540697 python3.9[54444]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 03:52:39 np0005540697 systemd[1]: Reloading.
Dec  1 03:52:39 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:52:39 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:52:40 np0005540697 systemd[1]: Starting Open vSwitch Database Unit...
Dec  1 03:52:40 np0005540697 chown[54486]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  1 03:52:40 np0005540697 ovs-ctl[54491]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  1 03:52:40 np0005540697 ovs-ctl[54491]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  1 03:52:40 np0005540697 ovs-ctl[54491]: Starting ovsdb-server [  OK  ]
Dec  1 03:52:40 np0005540697 ovs-vsctl[54540]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  1 03:52:40 np0005540697 ovs-vsctl[54557]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"203a4433-d8f4-4d80-8084-548a6d57cd5d\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  1 03:52:40 np0005540697 ovs-ctl[54491]: Configuring Open vSwitch system IDs [  OK  ]
Dec  1 03:52:40 np0005540697 ovs-vsctl[54565]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  1 03:52:40 np0005540697 ovs-ctl[54491]: Enabling remote OVSDB managers [  OK  ]
Dec  1 03:52:40 np0005540697 systemd[1]: Started Open vSwitch Database Unit.
Dec  1 03:52:40 np0005540697 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  1 03:52:40 np0005540697 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  1 03:52:40 np0005540697 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  1 03:52:40 np0005540697 kernel: openvswitch: Open vSwitch switching datapath
Dec  1 03:52:40 np0005540697 ovs-ctl[54610]: Inserting openvswitch module [  OK  ]
Dec  1 03:52:40 np0005540697 ovs-ctl[54579]: Starting ovs-vswitchd [  OK  ]
Dec  1 03:52:40 np0005540697 ovs-vsctl[54627]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  1 03:52:40 np0005540697 ovs-ctl[54579]: Enabling remote OVSDB managers [  OK  ]
Dec  1 03:52:40 np0005540697 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  1 03:52:40 np0005540697 systemd[1]: Starting Open vSwitch...
Dec  1 03:52:40 np0005540697 systemd[1]: Finished Open vSwitch.
Dec  1 03:52:42 np0005540697 python3.9[54779]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:52:43 np0005540697 python3.9[54931]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  1 03:52:45 np0005540697 kernel: SELinux:  Converting 2745 SID table entries...
Dec  1 03:52:45 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 03:52:45 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 03:52:45 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 03:52:45 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 03:52:45 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 03:52:45 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 03:52:45 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 03:52:46 np0005540697 python3.9[55086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:52:47 np0005540697 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  1 03:52:47 np0005540697 python3.9[55244]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:52:49 np0005540697 python3.9[55397]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:52:51 np0005540697 python3.9[55684]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 03:52:52 np0005540697 python3.9[55834]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:52:53 np0005540697 python3.9[55988]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:52:55 np0005540697 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 03:52:55 np0005540697 systemd[1]: Starting man-db-cache-update.service...
Dec  1 03:52:55 np0005540697 systemd[1]: Reloading.
Dec  1 03:52:55 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:52:55 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:52:55 np0005540697 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 03:52:55 np0005540697 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 03:52:55 np0005540697 systemd[1]: Finished man-db-cache-update.service.
Dec  1 03:52:55 np0005540697 systemd[1]: run-r8c0a240357004e6b86feea231ef42871.service: Deactivated successfully.
Dec  1 03:52:56 np0005540697 python3.9[56306]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 03:52:56 np0005540697 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  1 03:52:56 np0005540697 systemd[1]: Stopped Network Manager Wait Online.
Dec  1 03:52:56 np0005540697 systemd[1]: Stopping Network Manager Wait Online...
Dec  1 03:52:56 np0005540697 systemd[1]: Stopping Network Manager...
Dec  1 03:52:56 np0005540697 NetworkManager[7183]: <info>  [1764579176.6352] caught SIGTERM, shutting down normally.
Dec  1 03:52:56 np0005540697 NetworkManager[7183]: <info>  [1764579176.6372] dhcp4 (eth0): canceled DHCP transaction
Dec  1 03:52:56 np0005540697 NetworkManager[7183]: <info>  [1764579176.6372] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:52:56 np0005540697 NetworkManager[7183]: <info>  [1764579176.6372] dhcp4 (eth0): state changed no lease
Dec  1 03:52:56 np0005540697 NetworkManager[7183]: <info>  [1764579176.6376] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 03:52:56 np0005540697 NetworkManager[7183]: <info>  [1764579176.6458] exiting (success)
Dec  1 03:52:56 np0005540697 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 03:52:56 np0005540697 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 03:52:56 np0005540697 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  1 03:52:56 np0005540697 systemd[1]: Stopped Network Manager.
Dec  1 03:52:56 np0005540697 systemd[1]: NetworkManager.service: Consumed 15.086s CPU time, 4.1M memory peak, read 0B from disk, written 30.0K to disk.
Dec  1 03:52:56 np0005540697 systemd[1]: Starting Network Manager...
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.7157] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:92b7c342-66b4-4a80-acaf-17e049c1eafe)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.7160] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.7224] manager[0x5600d9dee090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 03:52:56 np0005540697 systemd[1]: Starting Hostname Service...
Dec  1 03:52:56 np0005540697 systemd[1]: Started Hostname Service.
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8301] hostname: hostname: using hostnamed
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8301] hostname: static hostname changed from (none) to "compute-0"
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8311] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8318] manager[0x5600d9dee090]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8318] manager[0x5600d9dee090]: rfkill: WWAN hardware radio set enabled
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8346] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8357] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8358] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8359] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8360] manager: Networking is enabled by state file
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8364] settings: Loaded settings plugin: keyfile (internal)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8369] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8408] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8425] dhcp: init: Using DHCP client 'internal'
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8428] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8436] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8445] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8454] device (lo): Activation: starting connection 'lo' (dfe05a7d-1dbe-4572-b7fc-2c1528ca0986)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8463] device (eth0): carrier: link connected
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8468] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8474] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8475] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8483] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8492] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8500] device (eth1): carrier: link connected
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8506] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8512] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (703de4c6-3c25-5c70-99c2-a604be2498bd) (indicated)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8514] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8521] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8530] device (eth1): Activation: starting connection 'ci-private-network' (703de4c6-3c25-5c70-99c2-a604be2498bd)
Dec  1 03:52:56 np0005540697 systemd[1]: Started Network Manager.
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8539] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8554] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8556] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8557] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8560] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8562] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8564] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8567] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8569] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8574] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8577] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8586] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8600] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8616] dhcp4 (eth0): state changed new lease, address=38.102.83.13
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8623] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8702] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8710] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8712] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8713] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8718] device (lo): Activation: successful, device activated.
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8725] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8729] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8733] device (eth1): Activation: successful, device activated.
Dec  1 03:52:56 np0005540697 systemd[1]: Starting Network Manager Wait Online...
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8742] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8743] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8746] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8751] device (eth0): Activation: successful, device activated.
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8758] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 03:52:56 np0005540697 NetworkManager[56318]: <info>  [1764579176.8762] manager: startup complete
Dec  1 03:52:56 np0005540697 systemd[1]: Finished Network Manager Wait Online.
Dec  1 03:52:57 np0005540697 python3.9[56533]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:53:03 np0005540697 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 03:53:03 np0005540697 systemd[1]: Starting man-db-cache-update.service...
Dec  1 03:53:03 np0005540697 systemd[1]: Reloading.
Dec  1 03:53:03 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:53:03 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:53:03 np0005540697 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 03:53:04 np0005540697 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 03:53:04 np0005540697 systemd[1]: Finished man-db-cache-update.service.
Dec  1 03:53:04 np0005540697 systemd[1]: run-rcd9bc70c466242df948b7e055126412e.service: Deactivated successfully.
Dec  1 03:53:05 np0005540697 python3.9[56992]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:53:06 np0005540697 python3.9[57144]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:07 np0005540697 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 03:53:07 np0005540697 python3.9[57298]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:08 np0005540697 python3.9[57450]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:08 np0005540697 python3.9[57602]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:09 np0005540697 python3.9[57754]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:10 np0005540697 python3.9[57906]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:53:11 np0005540697 python3.9[58029]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579189.81928-229-222886733068846/.source _original_basename=.3run29bv follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:11 np0005540697 python3.9[58181]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:12 np0005540697 python3.9[58333]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  1 03:53:13 np0005540697 python3.9[58485]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:16 np0005540697 python3.9[58912]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  1 03:53:17 np0005540697 ansible-async_wrapper.py[59087]: Invoked with j525523319188 300 /home/zuul/.ansible/tmp/ansible-tmp-1764579196.7493322-295-128158209768106/AnsiballZ_edpm_os_net_config.py _
Dec  1 03:53:17 np0005540697 ansible-async_wrapper.py[59090]: Starting module and watcher
Dec  1 03:53:17 np0005540697 ansible-async_wrapper.py[59090]: Start watching 59091 (300)
Dec  1 03:53:17 np0005540697 ansible-async_wrapper.py[59091]: Start module (59091)
Dec  1 03:53:17 np0005540697 ansible-async_wrapper.py[59087]: Return async_wrapper task started.
Dec  1 03:53:18 np0005540697 python3.9[59092]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  1 03:53:18 np0005540697 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  1 03:53:18 np0005540697 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  1 03:53:18 np0005540697 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  1 03:53:18 np0005540697 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  1 03:53:18 np0005540697 kernel: cfg80211: failed to load regulatory.db
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0253] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0274] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0885] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0887] audit: op="connection-add" uuid="df73b589-bf1b-44cb-8cd2-3b011b9af1b7" name="br-ex-br" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0904] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0905] audit: op="connection-add" uuid="df3e6fd9-6fc1-4fdf-8490-e29c6b8e76d1" name="br-ex-port" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0920] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0920] audit: op="connection-add" uuid="c77c9134-d9a2-4bc9-a675-9061acf1ed2a" name="eth1-port" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0936] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0937] audit: op="connection-add" uuid="b349a57e-6214-4143-89bc-0b64665ca5e3" name="vlan20-port" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0952] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0953] audit: op="connection-add" uuid="a14f104a-10af-4c5d-9435-f30eb0cefb50" name="vlan21-port" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0967] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0968] audit: op="connection-add" uuid="cd9e4587-871d-4aa4-a98e-911506434e28" name="vlan22-port" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.0993] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1012] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1013] audit: op="connection-add" uuid="7321da40-f48b-42d3-a5a2-4a61ce3c8aea" name="br-ex-if" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1067] audit: op="connection-update" uuid="703de4c6-3c25-5c70-99c2-a604be2498bd" name="ci-private-network" args="connection.master,connection.slave-type,connection.controller,connection.timestamp,connection.port-type,ovs-external-ids.data,ovs-interface.type,ipv4.never-default,ipv4.method,ipv4.routing-rules,ipv4.dns,ipv4.addresses,ipv4.routes,ipv6.routes,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1100] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1103] audit: op="connection-add" uuid="fc399742-03ef-45ca-8765-f95bd2e94ed2" name="vlan20-if" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1132] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1134] audit: op="connection-add" uuid="8c1de0af-5c08-449d-bc7a-c8b478f64b29" name="vlan21-if" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1164] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1167] audit: op="connection-add" uuid="f46883c1-6f0c-4bc7-9602-67888dee94bf" name="vlan22-if" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1187] audit: op="connection-delete" uuid="c03566d5-28ee-35b2-b3f7-ec229e142493" name="Wired connection 1" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1207] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1224] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1231] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (df73b589-bf1b-44cb-8cd2-3b011b9af1b7)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1232] audit: op="connection-activate" uuid="df73b589-bf1b-44cb-8cd2-3b011b9af1b7" name="br-ex-br" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1235] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1246] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1252] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (df3e6fd9-6fc1-4fdf-8490-e29c6b8e76d1)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1255] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1265] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1273] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c77c9134-d9a2-4bc9-a675-9061acf1ed2a)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1275] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1286] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1293] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b349a57e-6214-4143-89bc-0b64665ca5e3)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1296] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1307] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1314] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (a14f104a-10af-4c5d-9435-f30eb0cefb50)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1316] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1328] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1335] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (cd9e4587-871d-4aa4-a98e-911506434e28)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1337] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1341] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1344] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1354] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1362] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1369] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (7321da40-f48b-42d3-a5a2-4a61ce3c8aea)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1371] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1376] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1379] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1382] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1384] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1402] device (eth1): disconnecting for new activation request.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1403] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1408] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1412] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1414] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1420] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1428] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1435] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (fc399742-03ef-45ca-8765-f95bd2e94ed2)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1436] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1442] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1445] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1447] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1452] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1460] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1467] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (8c1de0af-5c08-449d-bc7a-c8b478f64b29)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1468] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1474] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1478] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1481] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1485] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1493] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1501] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (f46883c1-6f0c-4bc7-9602-67888dee94bf)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1502] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1507] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1510] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1512] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1515] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1537] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1541] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1546] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1549] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1562] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1568] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1575] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1581] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1584] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 kernel: ovs-system: entered promiscuous mode
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1593] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1600] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1606] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 03:53:20 np0005540697 kernel: Timeout policy base is empty
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1609] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1618] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1625] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1631] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1634] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 systemd-udevd[59097]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1642] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1648] dhcp4 (eth0): canceled DHCP transaction
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1648] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1649] dhcp4 (eth0): state changed no lease
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1651] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1668] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1673] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59093 uid=0 result="fail" reason="Device is not activated"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1717] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1723] dhcp4 (eth0): state changed new lease, address=38.102.83.13
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1730] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1793] device (eth1): disconnecting for new activation request.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1794] audit: op="connection-activate" uuid="703de4c6-3c25-5c70-99c2-a604be2498bd" name="ci-private-network" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1796] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1804] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1951] device (eth1): Activation: starting connection 'ci-private-network' (703de4c6-3c25-5c70-99c2-a604be2498bd)
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1959] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1990] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.1997] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2007] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2016] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2027] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59093 uid=0 result="success"
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2030] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2033] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2036] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2040] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2044] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2051] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2063] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2071] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2078] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2086] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2093] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2103] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2110] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2118] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2125] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2134] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2144] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2150] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 kernel: br-ex: entered promiscuous mode
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2214] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2216] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2223] device (eth1): Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2329] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  1 03:53:20 np0005540697 kernel: vlan22: entered promiscuous mode
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2370] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2403] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2405] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2412] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 kernel: vlan21: entered promiscuous mode
Dec  1 03:53:20 np0005540697 systemd-udevd[59098]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2540] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2554] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 kernel: vlan20: entered promiscuous mode
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2584] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2587] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2595] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2651] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2668] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2709] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2711] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2721] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2763] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2777] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2804] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2810] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 03:53:20 np0005540697 NetworkManager[56318]: <info>  [1764579200.2818] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 03:53:21 np0005540697 NetworkManager[56318]: <info>  [1764579201.4113] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59093 uid=0 result="success"
Dec  1 03:53:21 np0005540697 NetworkManager[56318]: <info>  [1764579201.6570] checkpoint[0x5600d9dc3950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  1 03:53:21 np0005540697 NetworkManager[56318]: <info>  [1764579201.6573] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59093 uid=0 result="success"
Dec  1 03:53:21 np0005540697 python3.9[59425]: ansible-ansible.legacy.async_status Invoked with jid=j525523319188.59087 mode=status _async_dir=/root/.ansible_async
Dec  1 03:53:22 np0005540697 NetworkManager[56318]: <info>  [1764579202.0462] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59093 uid=0 result="success"
Dec  1 03:53:22 np0005540697 NetworkManager[56318]: <info>  [1764579202.0477] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59093 uid=0 result="success"
Dec  1 03:53:22 np0005540697 NetworkManager[56318]: <info>  [1764579202.2691] audit: op="networking-control" arg="global-dns-configuration" pid=59093 uid=0 result="success"
Dec  1 03:53:22 np0005540697 NetworkManager[56318]: <info>  [1764579202.2740] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  1 03:53:22 np0005540697 NetworkManager[56318]: <info>  [1764579202.2777] audit: op="networking-control" arg="global-dns-configuration" pid=59093 uid=0 result="success"
Dec  1 03:53:22 np0005540697 NetworkManager[56318]: <info>  [1764579202.2812] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59093 uid=0 result="success"
Dec  1 03:53:22 np0005540697 NetworkManager[56318]: <info>  [1764579202.4922] checkpoint[0x5600d9dc3a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  1 03:53:22 np0005540697 NetworkManager[56318]: <info>  [1764579202.4928] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59093 uid=0 result="success"
Dec  1 03:53:22 np0005540697 ansible-async_wrapper.py[59091]: Module complete (59091)
Dec  1 03:53:22 np0005540697 ansible-async_wrapper.py[59090]: Done in kid B.
Dec  1 03:53:25 np0005540697 python3.9[59531]: ansible-ansible.legacy.async_status Invoked with jid=j525523319188.59087 mode=status _async_dir=/root/.ansible_async
Dec  1 03:53:25 np0005540697 python3.9[59631]: ansible-ansible.legacy.async_status Invoked with jid=j525523319188.59087 mode=cleanup _async_dir=/root/.ansible_async
Dec  1 03:53:26 np0005540697 python3.9[59783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:53:26 np0005540697 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 03:53:27 np0005540697 python3.9[59908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579206.054027-322-77312688576123/.source.returncode _original_basename=.svya7d4b follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:28 np0005540697 python3.9[60060]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:53:29 np0005540697 python3.9[60184]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579207.6941912-338-160047608962277/.source.cfg _original_basename=.ub0grler follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:30 np0005540697 python3.9[60336]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 03:53:31 np0005540697 systemd[1]: Reloading Network Manager...
Dec  1 03:53:31 np0005540697 NetworkManager[56318]: <info>  [1764579211.6459] audit: op="reload" arg="0" pid=60341 uid=0 result="success"
Dec  1 03:53:31 np0005540697 NetworkManager[56318]: <info>  [1764579211.6473] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  1 03:53:31 np0005540697 systemd[1]: Reloaded Network Manager.
Dec  1 03:53:32 np0005540697 systemd[1]: session-11.scope: Deactivated successfully.
Dec  1 03:53:32 np0005540697 systemd[1]: session-11.scope: Consumed 56.593s CPU time.
Dec  1 03:53:32 np0005540697 systemd-logind[792]: Session 11 logged out. Waiting for processes to exit.
Dec  1 03:53:32 np0005540697 systemd-logind[792]: Removed session 11.
Dec  1 03:53:39 np0005540697 systemd-logind[792]: New session 12 of user zuul.
Dec  1 03:53:39 np0005540697 systemd[1]: Started Session 12 of User zuul.
Dec  1 03:53:40 np0005540697 python3.9[60526]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:53:41 np0005540697 python3.9[60680]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:53:41 np0005540697 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 03:53:42 np0005540697 python3.9[60871]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:53:43 np0005540697 systemd[1]: session-12.scope: Deactivated successfully.
Dec  1 03:53:43 np0005540697 systemd[1]: session-12.scope: Consumed 2.790s CPU time.
Dec  1 03:53:43 np0005540697 systemd-logind[792]: Session 12 logged out. Waiting for processes to exit.
Dec  1 03:53:43 np0005540697 systemd-logind[792]: Removed session 12.
Dec  1 03:53:49 np0005540697 systemd-logind[792]: New session 13 of user zuul.
Dec  1 03:53:49 np0005540697 systemd[1]: Started Session 13 of User zuul.
Dec  1 03:53:50 np0005540697 python3.9[61052]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:53:51 np0005540697 python3.9[61207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:53:52 np0005540697 python3.9[61363]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:53:53 np0005540697 python3.9[61447]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:53:55 np0005540697 python3.9[61600]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:53:57 np0005540697 python3.9[61791]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:53:58 np0005540697 python3.9[61943]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:53:58 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:53:59 np0005540697 python3.9[62107]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:53:59 np0005540697 python3.9[62185]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:00 np0005540697 python3.9[62337]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:01 np0005540697 python3.9[62415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:54:02 np0005540697 python3.9[62567]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:54:02 np0005540697 python3.9[62719]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:54:03 np0005540697 python3.9[62871]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:54:04 np0005540697 python3.9[63023]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:54:05 np0005540697 python3.9[63175]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:54:07 np0005540697 python3.9[63328]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:54:08 np0005540697 python3.9[63482]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:54:09 np0005540697 python3.9[63634]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:54:10 np0005540697 python3.9[63786]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:54:11 np0005540697 python3.9[63939]: ansible-service_facts Invoked
Dec  1 03:54:11 np0005540697 network[63956]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 03:54:11 np0005540697 network[63957]: 'network-scripts' will be removed from distribution in near future.
Dec  1 03:54:11 np0005540697 network[63958]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 03:54:17 np0005540697 python3.9[64410]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:54:20 np0005540697 python3.9[64564]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  1 03:54:21 np0005540697 python3.9[64716]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:22 np0005540697 python3.9[64841]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579260.8299131-232-252434814615614/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:23 np0005540697 python3.9[64995]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:23 np0005540697 python3.9[65120]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579262.647407-247-204545318448684/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:25 np0005540697 python3.9[65274]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:26 np0005540697 python3.9[65428]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:54:27 np0005540697 python3.9[65512]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:54:29 np0005540697 python3.9[65666]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:54:29 np0005540697 python3.9[65750]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 03:54:29 np0005540697 systemd[1]: Stopping NTP client/server...
Dec  1 03:54:29 np0005540697 chronyd[795]: chronyd exiting
Dec  1 03:54:29 np0005540697 systemd[1]: chronyd.service: Deactivated successfully.
Dec  1 03:54:29 np0005540697 systemd[1]: Stopped NTP client/server.
Dec  1 03:54:29 np0005540697 systemd[1]: Starting NTP client/server...
Dec  1 03:54:30 np0005540697 chronyd[65758]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  1 03:54:30 np0005540697 chronyd[65758]: Frequency -23.524 +/- 0.110 ppm read from /var/lib/chrony/drift
Dec  1 03:54:30 np0005540697 chronyd[65758]: Loaded seccomp filter (level 2)
Dec  1 03:54:30 np0005540697 systemd[1]: Started NTP client/server.
Dec  1 03:54:30 np0005540697 systemd[1]: session-13.scope: Deactivated successfully.
Dec  1 03:54:30 np0005540697 systemd[1]: session-13.scope: Consumed 29.839s CPU time.
Dec  1 03:54:30 np0005540697 systemd-logind[792]: Session 13 logged out. Waiting for processes to exit.
Dec  1 03:54:30 np0005540697 systemd-logind[792]: Removed session 13.
Dec  1 03:54:36 np0005540697 systemd-logind[792]: New session 14 of user zuul.
Dec  1 03:54:36 np0005540697 systemd[1]: Started Session 14 of User zuul.
Dec  1 03:54:37 np0005540697 python3.9[65937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:54:38 np0005540697 python3.9[66093]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:39 np0005540697 python3.9[66268]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:39 np0005540697 python3.9[66346]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.9xsndufb recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:40 np0005540697 python3.9[66498]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:41 np0005540697 python3.9[66621]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579280.4170454-61-8170171123932/.source _original_basename=.686ur57y follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:42 np0005540697 python3.9[66773]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:54:43 np0005540697 python3.9[66925]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:44 np0005540697 python3.9[67048]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579283.0008328-85-97279334993581/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:54:45 np0005540697 python3.9[67200]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:45 np0005540697 python3.9[67323]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579284.4762392-85-115911531274440/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:54:46 np0005540697 python3.9[67475]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:47 np0005540697 python3.9[67627]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:48 np0005540697 python3.9[67750]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579286.8654768-122-111233989404602/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:49 np0005540697 python3.9[67902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:49 np0005540697 python3.9[68025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579288.3395948-137-139805445182771/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:51 np0005540697 python3.9[68177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:54:51 np0005540697 systemd[1]: Reloading.
Dec  1 03:54:51 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:54:51 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:54:51 np0005540697 systemd[1]: Reloading.
Dec  1 03:54:51 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:54:51 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:54:51 np0005540697 systemd[1]: Starting EDPM Container Shutdown...
Dec  1 03:54:51 np0005540697 systemd[1]: Finished EDPM Container Shutdown.
Dec  1 03:54:52 np0005540697 python3.9[68405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:53 np0005540697 python3.9[68528]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579291.7893996-160-219965145594940/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:53 np0005540697 python3.9[68680]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:54:54 np0005540697 python3.9[68803]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579293.3024373-175-131291944479029/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:54:55 np0005540697 python3.9[68955]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:54:55 np0005540697 systemd[1]: Reloading.
Dec  1 03:54:55 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:54:55 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:54:55 np0005540697 systemd[1]: Reloading.
Dec  1 03:54:56 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:54:56 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:54:56 np0005540697 systemd[1]: Starting Create netns directory...
Dec  1 03:54:56 np0005540697 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 03:54:56 np0005540697 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 03:54:56 np0005540697 systemd[1]: Finished Create netns directory.
Dec  1 03:54:57 np0005540697 python3.9[69183]: ansible-ansible.builtin.service_facts Invoked
Dec  1 03:54:57 np0005540697 network[69200]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 03:54:57 np0005540697 network[69201]: 'network-scripts' will be removed from distribution in near future.
Dec  1 03:54:57 np0005540697 network[69202]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 03:55:01 np0005540697 python3.9[69464]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:55:01 np0005540697 systemd[1]: Reloading.
Dec  1 03:55:02 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:55:02 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:55:02 np0005540697 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  1 03:55:02 np0005540697 iptables.init[69504]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  1 03:55:02 np0005540697 iptables.init[69504]: iptables: Flushing firewall rules: [  OK  ]
Dec  1 03:55:02 np0005540697 systemd[1]: iptables.service: Deactivated successfully.
Dec  1 03:55:02 np0005540697 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  1 03:55:03 np0005540697 python3.9[69701]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:55:04 np0005540697 python3.9[69855]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:55:04 np0005540697 systemd[1]: Reloading.
Dec  1 03:55:04 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:55:04 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:55:04 np0005540697 systemd[1]: Starting Netfilter Tables...
Dec  1 03:55:05 np0005540697 systemd[1]: Finished Netfilter Tables.
Dec  1 03:55:05 np0005540697 python3.9[70047]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:55:07 np0005540697 python3.9[70200]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:07 np0005540697 python3.9[70325]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579306.5238576-244-105773249110310/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:08 np0005540697 python3.9[70478]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 03:55:09 np0005540697 systemd[1]: Reloading OpenSSH server daemon...
Dec  1 03:55:09 np0005540697 systemd[1]: Reloaded OpenSSH server daemon.
Dec  1 03:55:09 np0005540697 python3.9[70634]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:10 np0005540697 python3.9[70786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:11 np0005540697 python3.9[70909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579310.1354473-275-136581019108855/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:12 np0005540697 python3.9[71061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 03:55:12 np0005540697 systemd[1]: Starting Time & Date Service...
Dec  1 03:55:12 np0005540697 systemd[1]: Started Time & Date Service.
Dec  1 03:55:13 np0005540697 python3.9[71217]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:14 np0005540697 python3.9[71369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:15 np0005540697 python3.9[71492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579313.862793-310-216347549545888/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:16 np0005540697 python3.9[71644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:16 np0005540697 python3.9[71767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579315.4423327-325-203446061318773/.source.yaml _original_basename=.vaule75n follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:17 np0005540697 python3.9[71919]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:18 np0005540697 python3.9[72042]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579317.0058398-340-112834959497244/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:19 np0005540697 python3.9[72194]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:55:19 np0005540697 python3.9[72347]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:55:21 np0005540697 python3[72500]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 03:55:21 np0005540697 python3.9[72652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:22 np0005540697 python3.9[72775]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579321.3045864-379-197487319054732/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:23 np0005540697 python3.9[72927]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:24 np0005540697 python3.9[73050]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579322.8498964-394-65154243255457/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:24 np0005540697 python3.9[73202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:25 np0005540697 python3.9[73325]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579324.3346152-409-209805952206387/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:26 np0005540697 python3.9[73477]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:27 np0005540697 python3.9[73600]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579325.8302052-424-204376533030113/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:27 np0005540697 python3.9[73752]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:55:28 np0005540697 python3.9[73875]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579327.2723606-439-261740009948565/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:29 np0005540697 python3.9[74027]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:30 np0005540697 python3.9[74179]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:55:31 np0005540697 python3.9[74338]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:32 np0005540697 python3.9[74491]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:32 np0005540697 python3.9[74643]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:33 np0005540697 python3.9[74795]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 03:55:33 np0005540697 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 03:55:34 np0005540697 python3.9[74949]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 03:55:35 np0005540697 systemd[1]: session-14.scope: Deactivated successfully.
Dec  1 03:55:35 np0005540697 systemd[1]: session-14.scope: Consumed 43.769s CPU time.
Dec  1 03:55:35 np0005540697 systemd-logind[792]: Session 14 logged out. Waiting for processes to exit.
Dec  1 03:55:35 np0005540697 systemd-logind[792]: Removed session 14.
Dec  1 03:55:41 np0005540697 systemd-logind[792]: New session 15 of user zuul.
Dec  1 03:55:41 np0005540697 systemd[1]: Started Session 15 of User zuul.
Dec  1 03:55:42 np0005540697 python3.9[75130]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  1 03:55:42 np0005540697 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 03:55:43 np0005540697 python3.9[75284]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:55:44 np0005540697 python3.9[75436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:55:45 np0005540697 python3.9[75588]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiNLPkxMsatHkPN6H+8RU/TAvJbzQ9ak7UsYuxxiNPdGDEnAPcfd80KrzEISKE7gzIuju9jdgLUuv+4IW622YlySU1vZrJeZ3a146NfWahAizh7MoxYjvyJmRmQYzRuFZ5SP6HpiFeg2qa7w85AQtpZFE+TQnimGeqgA4GO3GiqvY/QyDSU1TSTubJgk0K8YMgbGJkHJzSuvF/sYeQyQxTFM9L8cADv4kwlP7F8BbMcsNegMBenmVl93p9XucqOWiv9sC28/vy95i+Jzlvp6wedSLaFZf/bZy8TvFWMeNsoQUKz/5WNDhlA0fKT4Zjs9o/xe3GmJOatTC2p9Qsrg45X51+MKdDcPusAEWurhCVOgy9fILNUdzIYXEqCp1OTeYAdD8VP8DxYnXpR2FYrfE98EQHHeBG0YukzP6Ns4JKH5KZCEUZJuotEv6FFhg9uDauEGMAa3c/asMygoxmh+6gDDbXzpKS4z7nUKhqr7pAYeD+j4cPX0XQP4fL9OOQdCk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOK9OQWJi9HBz5rgUMJufk2tpg7TSX48ZqGIaovlcPm3#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGz4abR+iD9uW0KVHAJnGn/GoK9g+gHgG0AnWtic92ElLT1x4G9lBf176QZl8Xz8hz0ojstvHOnWC+0kD2kQYcY=#012 create=True mode=0644 path=/tmp/ansible.axm85pbm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:46 np0005540697 python3.9[75740]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.axm85pbm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:55:48 np0005540697 python3.9[75894]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.axm85pbm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:55:48 np0005540697 systemd[1]: session-15.scope: Deactivated successfully.
Dec  1 03:55:48 np0005540697 systemd[1]: session-15.scope: Consumed 4.388s CPU time.
Dec  1 03:55:48 np0005540697 systemd-logind[792]: Session 15 logged out. Waiting for processes to exit.
Dec  1 03:55:48 np0005540697 systemd-logind[792]: Removed session 15.
Dec  1 03:55:54 np0005540697 systemd-logind[792]: New session 16 of user zuul.
Dec  1 03:55:54 np0005540697 systemd[1]: Started Session 16 of User zuul.
Dec  1 03:55:55 np0005540697 python3.9[76072]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:55:56 np0005540697 python3.9[76228]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 03:55:57 np0005540697 python3.9[76382]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 03:55:58 np0005540697 python3.9[76535]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:55:59 np0005540697 python3.9[76688]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:56:00 np0005540697 python3.9[76842]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:56:01 np0005540697 python3.9[76997]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:01 np0005540697 systemd[1]: session-16.scope: Deactivated successfully.
Dec  1 03:56:01 np0005540697 systemd[1]: session-16.scope: Consumed 5.190s CPU time.
Dec  1 03:56:01 np0005540697 systemd-logind[792]: Session 16 logged out. Waiting for processes to exit.
Dec  1 03:56:01 np0005540697 systemd-logind[792]: Removed session 16.
Dec  1 03:56:07 np0005540697 systemd-logind[792]: New session 17 of user zuul.
Dec  1 03:56:07 np0005540697 systemd[1]: Started Session 17 of User zuul.
Dec  1 03:56:08 np0005540697 python3.9[77175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:56:09 np0005540697 python3.9[77333]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:56:10 np0005540697 python3.9[77417]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 03:56:12 np0005540697 python3.9[77568]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:56:14 np0005540697 python3.9[77719]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 03:56:15 np0005540697 python3.9[77869]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:56:15 np0005540697 python3.9[78019]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:56:16 np0005540697 systemd[1]: session-17.scope: Deactivated successfully.
Dec  1 03:56:16 np0005540697 systemd[1]: session-17.scope: Consumed 6.704s CPU time.
Dec  1 03:56:16 np0005540697 systemd-logind[792]: Session 17 logged out. Waiting for processes to exit.
Dec  1 03:56:16 np0005540697 systemd-logind[792]: Removed session 17.
Dec  1 03:56:22 np0005540697 systemd-logind[792]: New session 18 of user zuul.
Dec  1 03:56:22 np0005540697 systemd[1]: Started Session 18 of User zuul.
Dec  1 03:56:23 np0005540697 python3.9[78199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:56:25 np0005540697 python3.9[78355]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:26 np0005540697 python3.9[78507]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:27 np0005540697 python3.9[78659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:27 np0005540697 python3.9[78782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579386.418358-65-8979382912043/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=150ec538419d9e04015200bc4501e6253834d3a0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:28 np0005540697 python3.9[78934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:29 np0005540697 python3.9[79057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579388.1065204-65-153923717712826/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3388db45866fd6c3eafc9d6f3f2aff111aa1e0c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:30 np0005540697 python3.9[79209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:30 np0005540697 python3.9[79332]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579389.4751225-65-195114691302566/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b2999e2f84ae84489ddc5c2865d1b28fd04cdf71 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:31 np0005540697 python3.9[79484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:32 np0005540697 python3.9[79636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:33 np0005540697 python3.9[79788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:33 np0005540697 python3.9[79911]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579392.639651-124-47107136354149/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d1f1eef7424f92adbd41f0694914fbe161e2e2b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:34 np0005540697 python3.9[80063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:35 np0005540697 python3.9[80186]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579394.113948-124-116406460807034/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3388db45866fd6c3eafc9d6f3f2aff111aa1e0c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:36 np0005540697 python3.9[80338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:36 np0005540697 python3.9[80461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579395.504332-124-256285910495897/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0b57758dd04003b9b998d76970706ca1048aa27c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:37 np0005540697 python3.9[80613]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:38 np0005540697 python3.9[80765]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:39 np0005540697 chronyd[65758]: Selected source 23.133.168.244 (pool.ntp.org)
Dec  1 03:56:39 np0005540697 python3.9[80917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:39 np0005540697 python3.9[81040]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579398.6796603-183-95771181491968/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f017553a1e424819add58505a2b75691112c3ec5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:40 np0005540697 python3.9[81192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:41 np0005540697 python3.9[81315]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579400.1065364-183-49052749365986/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=08415772c2a123b900ef141c720aa2dcfada1e3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:42 np0005540697 python3.9[81467]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:42 np0005540697 python3.9[81590]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579401.4671612-183-143198862777717/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8bf62fe9ce65329f593b96ba39d9a248ccbe6a4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:43 np0005540697 python3.9[81742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:44 np0005540697 python3.9[81894]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:45 np0005540697 python3.9[82046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:45 np0005540697 python3.9[82169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579404.6472573-242-100762426985336/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=37153a91066ceba8295ed4f2e7d0154209f13f75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:46 np0005540697 python3.9[82321]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:47 np0005540697 python3.9[82444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579406.1818087-242-80419842393092/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5dfaceca2e556c8b863214f9510fbf99cca08d58 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:47 np0005540697 python3.9[82596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:48 np0005540697 python3.9[82719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579407.5379925-242-142064828503539/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0f10f58c35c7ab4e56e72d09a885a7232fcfcc9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:49 np0005540697 python3.9[82871]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:50 np0005540697 python3.9[83023]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:51 np0005540697 python3.9[83175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:51 np0005540697 python3.9[83298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579410.6228967-301-127943380287759/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7a88e43012921382b11ecaeaa2dacc022f2e276a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:52 np0005540697 python3.9[83450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:53 np0005540697 python3.9[83573]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579411.9378648-301-176186672900763/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=08415772c2a123b900ef141c720aa2dcfada1e3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:53 np0005540697 python3.9[83725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:54 np0005540697 python3.9[83848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579413.2714033-301-235265780585771/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=114f8adb95b254cba412a5adfc03345bf16a74c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:55 np0005540697 python3.9[84000]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:56 np0005540697 python3.9[84152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:57 np0005540697 python3.9[84275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579416.173032-369-268057638860423/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:56:58 np0005540697 python3.9[84427]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:56:59 np0005540697 python3.9[84579]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:56:59 np0005540697 python3.9[84702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579418.5095515-393-262410742075907/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:00 np0005540697 python3.9[84854]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:01 np0005540697 python3.9[85006]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:01 np0005540697 python3.9[85129]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579420.6050706-417-138482911960733/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:02 np0005540697 python3.9[85281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:03 np0005540697 python3.9[85433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:03 np0005540697 python3.9[85556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579422.7832136-441-244672472710352/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:04 np0005540697 python3.9[85708]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:05 np0005540697 python3.9[85860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:06 np0005540697 python3.9[85983]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579425.0161924-465-40917274793944/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:06 np0005540697 python3.9[86135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:07 np0005540697 python3.9[86287]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:08 np0005540697 python3.9[86410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579427.1920986-489-279026908600541/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:09 np0005540697 python3.9[86562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:09 np0005540697 python3.9[86714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:10 np0005540697 python3.9[86837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579429.3464954-513-276077949786095/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:11 np0005540697 python3.9[86989]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:12 np0005540697 python3.9[87141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:12 np0005540697 python3.9[87264]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579431.5241532-537-188702215374171/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbfb8d6cd9f3cb39afb14833aa4ef759cc4763ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:13 np0005540697 systemd[1]: session-18.scope: Deactivated successfully.
Dec  1 03:57:13 np0005540697 systemd[1]: session-18.scope: Consumed 40.465s CPU time.
Dec  1 03:57:13 np0005540697 systemd-logind[792]: Session 18 logged out. Waiting for processes to exit.
Dec  1 03:57:13 np0005540697 systemd-logind[792]: Removed session 18.
Dec  1 03:57:19 np0005540697 systemd-logind[792]: New session 19 of user zuul.
Dec  1 03:57:19 np0005540697 systemd[1]: Started Session 19 of User zuul.
Dec  1 03:57:20 np0005540697 python3.9[87444]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:57:22 np0005540697 python3.9[87600]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:22 np0005540697 python3.9[87752]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:23 np0005540697 python3.9[87902]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:57:24 np0005540697 python3.9[88054]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  1 03:57:26 np0005540697 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  1 03:57:26 np0005540697 python3.9[88210]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:57:27 np0005540697 python3.9[88294]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:57:30 np0005540697 python3.9[88447]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 03:57:31 np0005540697 python3[88602]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  1 03:57:32 np0005540697 python3.9[88754]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:33 np0005540697 python3.9[88906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:34 np0005540697 python3.9[88984]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:34 np0005540697 python3.9[89136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:35 np0005540697 python3.9[89214]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ybpzof30 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:36 np0005540697 python3.9[89366]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:36 np0005540697 python3.9[89444]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:37 np0005540697 python3.9[89596]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:57:38 np0005540697 python3[89749]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 03:57:39 np0005540697 python3.9[89901]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:40 np0005540697 python3.9[90026]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579458.982581-157-257900127800279/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:41 np0005540697 python3.9[90178]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:41 np0005540697 python3.9[90303]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579460.7258718-172-167777688437129/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:42 np0005540697 python3.9[90455]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:43 np0005540697 python3.9[90580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579462.1943512-187-52638813571980/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:44 np0005540697 python3.9[90732]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:45 np0005540697 python3.9[90857]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579463.6121373-202-45870419103418/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:45 np0005540697 python3.9[91009]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:46 np0005540697 python3.9[91134]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579465.2037306-217-226464528127392/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:47 np0005540697 python3.9[91286]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:48 np0005540697 python3.9[91438]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:57:49 np0005540697 python3.9[91593]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:49 np0005540697 python3.9[91745]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:57:50 np0005540697 python3.9[91898]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:57:51 np0005540697 python3.9[92052]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:57:52 np0005540697 python3.9[92207]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:57:53 np0005540697 python3.9[92358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:57:54 np0005540697 python3.9[92511]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:57:54 np0005540697 ovs-vsctl[92512]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  1 03:57:55 np0005540697 python3.9[92664]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:57:56 np0005540697 python3.9[92819]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:57:56 np0005540697 ovs-vsctl[92820]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  1 03:57:57 np0005540697 python3.9[92970]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:57:58 np0005540697 python3.9[93124]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:57:58 np0005540697 python3.9[93276]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:57:59 np0005540697 python3.9[93354]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:00 np0005540697 python3.9[93506]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:00 np0005540697 python3.9[93584]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:01 np0005540697 python3.9[93736]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:02 np0005540697 python3.9[93888]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:02 np0005540697 python3.9[93966]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:03 np0005540697 python3.9[94118]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:04 np0005540697 python3.9[94196]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:04 np0005540697 python3.9[94348]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:58:04 np0005540697 systemd[1]: Reloading.
Dec  1 03:58:04 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:58:04 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:58:05 np0005540697 python3.9[94538]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:06 np0005540697 python3.9[94616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:07 np0005540697 python3.9[94768]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:07 np0005540697 python3.9[94846]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:08 np0005540697 python3.9[94998]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:58:08 np0005540697 systemd[1]: Reloading.
Dec  1 03:58:08 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:58:08 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:58:08 np0005540697 systemd[1]: Starting Create netns directory...
Dec  1 03:58:08 np0005540697 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 03:58:08 np0005540697 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 03:58:08 np0005540697 systemd[1]: Finished Create netns directory.
Dec  1 03:58:09 np0005540697 python3.9[95193]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:10 np0005540697 python3.9[95345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:10 np0005540697 python3.9[95468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579489.8470542-468-66194274900236/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:12 np0005540697 python3.9[95620]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:12 np0005540697 python3.9[95772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:13 np0005540697 python3.9[95895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579492.2950747-493-205429807434665/.source.json _original_basename=._tbcuwot follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:14 np0005540697 python3.9[96047]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:16 np0005540697 python3.9[96474]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  1 03:58:17 np0005540697 python3.9[96626]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 03:58:18 np0005540697 python3.9[96778]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 03:58:18 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:58:20 np0005540697 python3[96941]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 03:58:20 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:58:20 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:58:20 np0005540697 podman[96978]: 2025-12-01 08:58:20.520196135 +0000 UTC m=+0.057761150 container create 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 03:58:20 np0005540697 podman[96978]: 2025-12-01 08:58:20.489796759 +0000 UTC m=+0.027361784 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 03:58:20 np0005540697 python3[96941]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 03:58:21 np0005540697 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 03:58:21 np0005540697 python3.9[97169]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:58:22 np0005540697 python3.9[97323]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:22 np0005540697 python3.9[97399]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:58:23 np0005540697 python3.9[97550]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764579502.7858133-581-19840268907498/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:58:23 np0005540697 python3.9[97626]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 03:58:23 np0005540697 systemd[1]: Reloading.
Dec  1 03:58:24 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:58:24 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:58:24 np0005540697 python3.9[97738]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:58:24 np0005540697 systemd[1]: Reloading.
Dec  1 03:58:25 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:58:25 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:58:25 np0005540697 systemd[1]: Starting ovn_controller container...
Dec  1 03:58:25 np0005540697 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  1 03:58:25 np0005540697 systemd[1]: Started libcrun container.
Dec  1 03:58:25 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc769c29b168e16f1b96d34e2669cec1d95c9e0a71612397bcb5e036a834b09d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 03:58:25 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.
Dec  1 03:58:25 np0005540697 podman[97779]: 2025-12-01 08:58:25.383923284 +0000 UTC m=+0.149335610 container init 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  1 03:58:25 np0005540697 ovn_controller[97794]: + sudo -E kolla_set_configs
Dec  1 03:58:25 np0005540697 podman[97779]: 2025-12-01 08:58:25.413569553 +0000 UTC m=+0.178981859 container start 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 03:58:25 np0005540697 edpm-start-podman-container[97779]: ovn_controller
Dec  1 03:58:25 np0005540697 systemd[1]: Created slice User Slice of UID 0.
Dec  1 03:58:25 np0005540697 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  1 03:58:25 np0005540697 edpm-start-podman-container[97778]: Creating additional drop-in dependency for "ovn_controller" (8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4)
Dec  1 03:58:25 np0005540697 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  1 03:58:25 np0005540697 systemd[1]: Starting User Manager for UID 0...
Dec  1 03:58:25 np0005540697 systemd[1]: Reloading.
Dec  1 03:58:25 np0005540697 podman[97800]: 2025-12-01 08:58:25.523107407 +0000 UTC m=+0.099201144 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  1 03:58:25 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:58:25 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:58:25 np0005540697 systemd[1]: 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4-26ee0863d28cdab5.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 03:58:25 np0005540697 systemd[1]: 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4-26ee0863d28cdab5.service: Failed with result 'exit-code'.
Dec  1 03:58:25 np0005540697 systemd[1]: Started ovn_controller container.
Dec  1 03:58:25 np0005540697 systemd[97844]: Queued start job for default target Main User Target.
Dec  1 03:58:25 np0005540697 systemd[97844]: Created slice User Application Slice.
Dec  1 03:58:25 np0005540697 systemd[97844]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  1 03:58:25 np0005540697 systemd[97844]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 03:58:25 np0005540697 systemd[97844]: Reached target Paths.
Dec  1 03:58:25 np0005540697 systemd[97844]: Reached target Timers.
Dec  1 03:58:25 np0005540697 systemd[97844]: Starting D-Bus User Message Bus Socket...
Dec  1 03:58:25 np0005540697 systemd[97844]: Starting Create User's Volatile Files and Directories...
Dec  1 03:58:25 np0005540697 systemd[97844]: Finished Create User's Volatile Files and Directories.
Dec  1 03:58:25 np0005540697 systemd[97844]: Listening on D-Bus User Message Bus Socket.
Dec  1 03:58:25 np0005540697 systemd[97844]: Reached target Sockets.
Dec  1 03:58:25 np0005540697 systemd[97844]: Reached target Basic System.
Dec  1 03:58:25 np0005540697 systemd[97844]: Reached target Main User Target.
Dec  1 03:58:25 np0005540697 systemd[97844]: Startup finished in 164ms.
Dec  1 03:58:25 np0005540697 systemd[1]: Started User Manager for UID 0.
Dec  1 03:58:25 np0005540697 systemd[1]: Started Session c1 of User root.
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: INFO:__main__:Validating config file
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: INFO:__main__:Writing out command to execute
Dec  1 03:58:26 np0005540697 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: ++ cat /run_command
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: + ARGS=
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: + sudo kolla_copy_cacerts
Dec  1 03:58:26 np0005540697 systemd[1]: Started Session c2 of User root.
Dec  1 03:58:26 np0005540697 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: + [[ ! -n '' ]]
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: + . kolla_extend_start
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: + umask 0022
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  1 03:58:26 np0005540697 NetworkManager[56318]: <info>  [1764579506.0937] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  1 03:58:26 np0005540697 NetworkManager[56318]: <info>  [1764579506.0942] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 03:58:26 np0005540697 NetworkManager[56318]: <info>  [1764579506.0952] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Dec  1 03:58:26 np0005540697 NetworkManager[56318]: <info>  [1764579506.0957] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Dec  1 03:58:26 np0005540697 NetworkManager[56318]: <info>  [1764579506.0960] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 03:58:26 np0005540697 kernel: br-int: entered promiscuous mode
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 03:58:26 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:26Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 03:58:26 np0005540697 NetworkManager[56318]: <info>  [1764579506.1141] manager: (ovn-5e5194-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  1 03:58:26 np0005540697 systemd-udevd[98015]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 03:58:26 np0005540697 kernel: genev_sys_6081: entered promiscuous mode
Dec  1 03:58:26 np0005540697 systemd-udevd[98017]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 03:58:26 np0005540697 NetworkManager[56318]: <info>  [1764579506.1318] device (genev_sys_6081): carrier: link connected
Dec  1 03:58:26 np0005540697 NetworkManager[56318]: <info>  [1764579506.1321] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec  1 03:58:26 np0005540697 python3.9[98064]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:58:26 np0005540697 ovs-vsctl[98065]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  1 03:58:27 np0005540697 python3.9[98217]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:58:27 np0005540697 ovs-vsctl[98219]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  1 03:58:28 np0005540697 python3.9[98372]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:58:28 np0005540697 ovs-vsctl[98373]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  1 03:58:28 np0005540697 systemd[1]: session-19.scope: Deactivated successfully.
Dec  1 03:58:28 np0005540697 systemd[1]: session-19.scope: Consumed 51.764s CPU time.
Dec  1 03:58:28 np0005540697 systemd-logind[792]: Session 19 logged out. Waiting for processes to exit.
Dec  1 03:58:28 np0005540697 systemd-logind[792]: Removed session 19.
Dec  1 03:58:34 np0005540697 systemd-logind[792]: New session 21 of user zuul.
Dec  1 03:58:34 np0005540697 systemd[1]: Started Session 21 of User zuul.
Dec  1 03:58:35 np0005540697 python3.9[98551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:58:36 np0005540697 systemd[1]: Stopping User Manager for UID 0...
Dec  1 03:58:36 np0005540697 systemd[97844]: Activating special unit Exit the Session...
Dec  1 03:58:36 np0005540697 systemd[97844]: Stopped target Main User Target.
Dec  1 03:58:36 np0005540697 systemd[97844]: Stopped target Basic System.
Dec  1 03:58:36 np0005540697 systemd[97844]: Stopped target Paths.
Dec  1 03:58:36 np0005540697 systemd[97844]: Stopped target Sockets.
Dec  1 03:58:36 np0005540697 systemd[97844]: Stopped target Timers.
Dec  1 03:58:36 np0005540697 systemd[97844]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  1 03:58:36 np0005540697 systemd[97844]: Closed D-Bus User Message Bus Socket.
Dec  1 03:58:36 np0005540697 systemd[97844]: Stopped Create User's Volatile Files and Directories.
Dec  1 03:58:36 np0005540697 systemd[97844]: Removed slice User Application Slice.
Dec  1 03:58:36 np0005540697 systemd[97844]: Reached target Shutdown.
Dec  1 03:58:36 np0005540697 systemd[97844]: Finished Exit the Session.
Dec  1 03:58:36 np0005540697 systemd[97844]: Reached target Exit the Session.
Dec  1 03:58:36 np0005540697 systemd[1]: user@0.service: Deactivated successfully.
Dec  1 03:58:36 np0005540697 systemd[1]: Stopped User Manager for UID 0.
Dec  1 03:58:36 np0005540697 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  1 03:58:36 np0005540697 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  1 03:58:36 np0005540697 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  1 03:58:36 np0005540697 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  1 03:58:36 np0005540697 systemd[1]: Removed slice User Slice of UID 0.
Dec  1 03:58:36 np0005540697 python3.9[98709]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:37 np0005540697 python3.9[98861]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:37 np0005540697 python3.9[99014]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:38 np0005540697 python3.9[99166]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:39 np0005540697 python3.9[99319]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:39 np0005540697 python3.9[99469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:58:40 np0005540697 python3.9[99621]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  1 03:58:42 np0005540697 python3.9[99771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:43 np0005540697 python3.9[99892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579521.674314-86-232037340070035/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:43 np0005540697 python3.9[100042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:44 np0005540697 python3.9[100164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579523.3580985-101-170002834608947/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:45 np0005540697 python3.9[100316]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 03:58:46 np0005540697 python3.9[100400]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 03:58:49 np0005540697 python3.9[100553]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 03:58:49 np0005540697 python3.9[100706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:50 np0005540697 python3.9[100827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579529.3582454-138-176677030565849/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:51 np0005540697 python3.9[100977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:51 np0005540697 python3.9[101098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579530.5935347-138-147392427578077/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:52 np0005540697 python3.9[101248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:53 np0005540697 python3.9[101369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579532.4606302-182-64270215880683/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:54 np0005540697 python3.9[101519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:54 np0005540697 python3.9[101640]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579533.7182024-182-250273832864779/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:55 np0005540697 python3.9[101790]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:58:56 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:56Z|00025|memory|INFO|16256 kB peak resident set size after 30.1 seconds
Dec  1 03:58:56 np0005540697 ovn_controller[97794]: 2025-12-01T08:58:56Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  1 03:58:56 np0005540697 podman[101916]: 2025-12-01 08:58:56.189692915 +0000 UTC m=+0.117587366 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 03:58:56 np0005540697 python3.9[101962]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:57 np0005540697 python3.9[102123]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:57 np0005540697 python3.9[102201]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:58 np0005540697 python3.9[102353]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:58:58 np0005540697 python3.9[102431]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:58:59 np0005540697 python3.9[102583]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:00 np0005540697 python3.9[102735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:59:00 np0005540697 python3.9[102813]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:01 np0005540697 python3.9[102965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:59:02 np0005540697 python3.9[103044]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:03 np0005540697 python3.9[103196]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:03 np0005540697 systemd[1]: Reloading.
Dec  1 03:59:03 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:59:03 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:59:04 np0005540697 python3.9[103384]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:59:04 np0005540697 python3.9[103462]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:05 np0005540697 python3.9[103614]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:59:06 np0005540697 python3.9[103692]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:06 np0005540697 python3.9[103844]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:06 np0005540697 systemd[1]: Reloading.
Dec  1 03:59:07 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:59:07 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:59:07 np0005540697 systemd[1]: Starting Create netns directory...
Dec  1 03:59:07 np0005540697 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 03:59:07 np0005540697 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 03:59:07 np0005540697 systemd[1]: Finished Create netns directory.
Dec  1 03:59:08 np0005540697 python3.9[104038]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:59:08 np0005540697 python3.9[104190]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:59:09 np0005540697 python3.9[104313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579548.2829967-333-32174416580480/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:59:10 np0005540697 python3.9[104467]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 03:59:11 np0005540697 python3.9[104619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 03:59:12 np0005540697 python3.9[104742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579550.712478-358-147151651877656/.source.json _original_basename=.pxy6dbfs follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:12 np0005540697 python3.9[104894]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:15 np0005540697 python3.9[105321]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  1 03:59:16 np0005540697 python3.9[105473]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 03:59:17 np0005540697 python3.9[105625]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 03:59:19 np0005540697 python3[105803]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 03:59:19 np0005540697 podman[105839]: 2025-12-01 08:59:19.628011057 +0000 UTC m=+0.073245681 container create f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 03:59:19 np0005540697 podman[105839]: 2025-12-01 08:59:19.588583878 +0000 UTC m=+0.033818572 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 03:59:19 np0005540697 python3[105803]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
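The `config_data` label and the flags in the `podman create` line above carry the same information twice: the dict is serialized into a label, and its keys are expanded into CLI arguments. A minimal sketch of that mapping — illustrative only, not the actual `edpm_container_manage` code — under the assumption that `environment` becomes `--env`, `net` becomes `--network`, and each `volumes` entry becomes a `--volume` flag:

```python
# Illustrative sketch (NOT the real edpm_ansible implementation): expand a
# config_data dict like the one logged above into `podman create` arguments.
def podman_args(name, cfg):
    args = ["podman", "create", "--name", name]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]          # KOLLA_CONFIG_STRATEGY etc.
    args += ["--network", cfg.get("net", "bridge")]
    if cfg.get("pid"):
        args += ["--pid", cfg["pid"]]              # 'pid': 'host' in the log
    if cfg.get("privileged"):
        args.append("--privileged")
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]                  # keeps :z/:Z/:shared suffixes
    args.append(cfg["image"])                      # image comes last
    return args

# Abbreviated version of the logged config_data:
cfg = {
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "net": "host",
    "pid": "host",
    "privileged": True,
    "user": "root",
    "volumes": ["/run/openvswitch:/run/openvswitch:z"],
    "image": "quay.io/podified-antelope-centos9/"
             "openstack-neutron-metadata-agent-ovn:current-podified",
}
print(" ".join(podman_args("ovn_metadata_agent", cfg)))
```

Note the SELinux suffixes on the volume specs: `:z` relabels for shared access, `:Z` (the TLS key) for exclusive access by this container.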
Dec  1 03:59:20 np0005540697 python3.9[106030]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:59:21 np0005540697 python3.9[106184]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:21 np0005540697 python3.9[106260]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 03:59:22 np0005540697 python3.9[106411]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764579561.856821-446-134719414053572/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:23 np0005540697 python3.9[106487]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 03:59:23 np0005540697 systemd[1]: Reloading.
Dec  1 03:59:23 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:59:23 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:59:24 np0005540697 python3.9[106597]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:24 np0005540697 systemd[1]: Reloading.
Dec  1 03:59:24 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:59:24 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:59:24 np0005540697 systemd[1]: Starting ovn_metadata_agent container...
Dec  1 03:59:24 np0005540697 systemd[1]: Started libcrun container.
Dec  1 03:59:24 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4e22b31e66682f6e60ab976ee2f93eb653a217cefed5a5c2d1dcbb9bc22b8b/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  1 03:59:24 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4e22b31e66682f6e60ab976ee2f93eb653a217cefed5a5c2d1dcbb9bc22b8b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 03:59:24 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.
Dec  1 03:59:24 np0005540697 podman[106638]: 2025-12-01 08:59:24.558740627 +0000 UTC m=+0.147792055 container init f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + sudo -E kolla_set_configs
Dec  1 03:59:24 np0005540697 podman[106638]: 2025-12-01 08:59:24.590103249 +0000 UTC m=+0.179154717 container start f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  1 03:59:24 np0005540697 edpm-start-podman-container[106638]: ovn_metadata_agent
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Validating config file
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Copying service configuration files
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Writing out command to execute
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
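The `kolla_set_configs` lines above follow a fixed COPY_ALWAYS pattern: load `/var/lib/kolla/config_files/config.json`, delete any stale destination, copy the source file in, then set permissions. A runnable miniature of that loop — the config shape here is a simplified assumption, not the full kolla `config.json` schema:

```python
# Miniature of the kolla_set_configs COPY_ALWAYS loop traced above.
# Paths and the config layout are illustrative; the real schema is richer.
import json, os, shutil, tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "01-rootwrap.conf")       # stands in for /etc/neutron.conf.d/...
dest = os.path.join(workdir, "etc", "rootwrap.conf")  # stands in for /etc/neutron/rootwrap.conf
with open(src, "w") as f:
    f.write("[DEFAULT]\n")

config = {"config_files": [{"source": src, "dest": dest, "perm": "0600"}]}

for entry in config["config_files"]:
    os.makedirs(os.path.dirname(entry["dest"]), exist_ok=True)
    if os.path.exists(entry["dest"]):
        os.unlink(entry["dest"])                      # "Deleting ..." step
    shutil.copy(entry["source"], entry["dest"])       # "Copying ... to ..." step
    os.chmod(entry["dest"], int(entry["perm"], 8))    # "Setting permission for ..." step
```

Because the strategy is COPY_ALWAYS, this runs on every container start, which is why the same copy/permission lines reappear after each restart.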
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: ++ cat /run_command
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + CMD=neutron-ovn-metadata-agent
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + ARGS=
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + sudo kolla_copy_cacerts
Dec  1 03:59:24 np0005540697 podman[106661]: 2025-12-01 08:59:24.692572073 +0000 UTC m=+0.082192491 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 03:59:24 np0005540697 edpm-start-podman-container[106637]: Creating additional drop-in dependency for "ovn_metadata_agent" (f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed)
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + [[ ! -n '' ]]
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + . kolla_extend_start
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: Running command: 'neutron-ovn-metadata-agent'
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + umask 0022
Dec  1 03:59:24 np0005540697 ovn_metadata_agent[106654]: + exec neutron-ovn-metadata-agent
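The shell trace above (`++ cat /run_command` through `+ exec ...`) is the tail of the kolla start script: read the command written out earlier, announce it, set the umask, and `exec` so the agent replaces the shell as PID-1 of the container's entrypoint. The same sequence sketched in Python, with a temp file standing in for `/run_command` so it runs outside the container:

```python
# Sketch of the kolla start sequence traced above: read the command from
# /run_command, announce it, then exec it. A temp file stands in for
# /run_command so this is runnable outside the container.
import os, tempfile

with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("neutron-ovn-metadata-agent")   # written by kolla_set_configs earlier
    run_command = f.name

with open(run_command) as f:
    cmd = f.read().strip()                  # CMD=neutron-ovn-metadata-agent
args = ""                                   # ARGS= (empty in the log)
print(f"Running command: '{cmd}'")
os.umask(0o022)
# In the real container the shell now replaces itself with the agent:
# os.execvp(cmd, [cmd] + args.split())
```

Because of the `exec`, the journald lines that follow come directly from `neutron-ovn-metadata-agent`, still under the `ovn_metadata_agent[106654]` identifier.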
Dec  1 03:59:24 np0005540697 systemd[1]: Reloading.
Dec  1 03:59:24 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:59:24 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:59:24 np0005540697 systemd[1]: Started ovn_metadata_agent container.
Dec  1 03:59:25 np0005540697 systemd[1]: session-21.scope: Deactivated successfully.
Dec  1 03:59:25 np0005540697 systemd[1]: session-21.scope: Consumed 39.024s CPU time.
Dec  1 03:59:25 np0005540697 systemd-logind[792]: Session 21 logged out. Waiting for processes to exit.
Dec  1 03:59:25 np0005540697 systemd-logind[792]: Removed session 21.
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.435 106659 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.435 106659 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.436 106659 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.436 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.436 106659 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.436 106659 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.436 106659 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.436 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.437 106659 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.438 106659 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.439 106659 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.440 106659 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.441 106659 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.441 106659 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.441 106659 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.441 106659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.441 106659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.441 106659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.441 106659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.441 106659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.442 106659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.443 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.444 106659 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.445 106659 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.446 106659 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.447 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.448 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.449 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.450 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.451 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.452 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.453 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.454 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.455 106659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.456 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.457 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.458 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.459 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.460 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.461 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.462 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.463 106659 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.464 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.465 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.466 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.467 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.468 106659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.468 106659 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.477 106659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.477 106659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.477 106659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.477 106659 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.478 106659 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.491 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 203a4433-d8f4-4d80-8084-548a6d57cd5d (UUID: 203a4433-d8f4-4d80-8084-548a6d57cd5d) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.521 106659 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.522 106659 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.522 106659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.522 106659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.524 106659 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.532 106659 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.537 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '203a4433-d8f4-4d80-8084-548a6d57cd5d'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], external_ids={}, name=203a4433-d8f4-4d80-8084-548a6d57cd5d, nb_cfg_timestamp=1764579514112, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.538 106659 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ff7f12c3160>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.538 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.538 106659 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.539 106659 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.539 106659 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.543 106659 DEBUG oslo_service.service [-] Started child 106766 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.546 106659 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpiw1mso96/privsep.sock']#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.547 106766 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-522302'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.575 106766 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.576 106766 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.576 106766 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.580 106766 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.586 106766 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  1 03:59:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:26.593 106766 INFO eventlet.wsgi.server [-] (106766) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  1 03:59:26 np0005540697 podman[106770]: 2025-12-01 08:59:26.747362216 +0000 UTC m=+0.115110277 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 03:59:27 np0005540697 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.248 106659 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.249 106659 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpiw1mso96/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.140 106797 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.147 106797 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.151 106797 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.151 106797 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106797#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.251 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[260d7379-6e05-42f1-8b05-f0ea7db160d7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.739 106797 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.739 106797 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 03:59:27 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:27.740 106797 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.250 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[2810f28b-a4a2-45b0-9ce5-ca232f88fcea]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.254 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, column=external_ids, values=({'neutron:ovn-metadata-id': '2c3782f6-2832-5ca4-a543-3c860304c1aa'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.516 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.546 106659 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.547 106659 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.547 106659 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.547 106659 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.547 106659 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.547 106659 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.548 106659 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.548 106659 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.548 106659 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.548 106659 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.548 106659 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.548 106659 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.548 106659 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.549 106659 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.549 106659 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.549 106659 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.549 106659 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.549 106659 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.549 106659 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.550 106659 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.550 106659 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.550 106659 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.550 106659 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.550 106659 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.550 106659 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.551 106659 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.551 106659 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.551 106659 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.551 106659 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.551 106659 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.552 106659 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.552 106659 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.552 106659 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.552 106659 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.552 106659 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.552 106659 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.552 106659 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.553 106659 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.553 106659 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.553 106659 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.553 106659 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.553 106659 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.554 106659 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.554 106659 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.554 106659 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.554 106659 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.554 106659 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.554 106659 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.554 106659 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.555 106659 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.555 106659 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.555 106659 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.555 106659 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.555 106659 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.555 106659 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.555 106659 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.556 106659 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.556 106659 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.556 106659 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.556 106659 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.556 106659 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.556 106659 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.556 106659 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.557 106659 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.557 106659 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.557 106659 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.557 106659 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.557 106659 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.557 106659 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.558 106659 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.558 106659 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.558 106659 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.558 106659 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.558 106659 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.558 106659 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.558 106659 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.559 106659 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.559 106659 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.559 106659 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.559 106659 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.559 106659 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.559 106659 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.560 106659 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.560 106659 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.560 106659 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.560 106659 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.560 106659 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.560 106659 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.560 106659 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.561 106659 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.561 106659 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.561 106659 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.561 106659 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.561 106659 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.561 106659 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.561 106659 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.562 106659 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.562 106659 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.562 106659 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.562 106659 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.562 106659 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.562 106659 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.562 106659 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.563 106659 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.563 106659 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.563 106659 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.563 106659 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.563 106659 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.563 106659 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.564 106659 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.564 106659 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.564 106659 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.564 106659 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.564 106659 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.565 106659 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.565 106659 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.565 106659 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.565 106659 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.565 106659 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.565 106659 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.566 106659 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.566 106659 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.566 106659 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.566 106659 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.566 106659 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.566 106659 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.567 106659 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.567 106659 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.567 106659 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.567 106659 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.567 106659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.567 106659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.567 106659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.568 106659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.568 106659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.568 106659 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.568 106659 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.568 106659 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.569 106659 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.569 106659 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.569 106659 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.569 106659 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.569 106659 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.569 106659 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.569 106659 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.570 106659 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.570 106659 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.570 106659 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.570 106659 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.570 106659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.570 106659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.570 106659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.571 106659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.571 106659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.571 106659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.571 106659 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.571 106659 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.571 106659 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.571 106659 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.572 106659 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.572 106659 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.572 106659 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.572 106659 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.572 106659 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.572 106659 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.572 106659 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.573 106659 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.573 106659 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.573 106659 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.573 106659 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.573 106659 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.573 106659 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.574 106659 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.574 106659 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.574 106659 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.574 106659 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.574 106659 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.574 106659 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.575 106659 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.575 106659 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.575 106659 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.575 106659 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.575 106659 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.575 106659 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.576 106659 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.576 106659 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.576 106659 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.576 106659 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.577 106659 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.577 106659 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.577 106659 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.577 106659 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.578 106659 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.578 106659 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.578 106659 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.578 106659 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.578 106659 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.578 106659 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.579 106659 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.579 106659 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.579 106659 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.579 106659 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.579 106659 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.579 106659 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.580 106659 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.580 106659 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.580 106659 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.580 106659 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.580 106659 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.580 106659 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.580 106659 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.580 106659 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.581 106659 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.581 106659 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.581 106659 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.581 106659 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.581 106659 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.581 106659 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.581 106659 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.582 106659 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.582 106659 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.582 106659 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.582 106659 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.582 106659 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.583 106659 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.583 106659 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.583 106659 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.583 106659 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.583 106659 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.584 106659 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.584 106659 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.584 106659 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.584 106659 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.584 106659 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.584 106659 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.585 106659 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.585 106659 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.585 106659 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.585 106659 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.586 106659 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.586 106659 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.586 106659 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.586 106659 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.586 106659 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.587 106659 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.587 106659 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.587 106659 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.587 106659 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.587 106659 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.588 106659 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.588 106659 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.588 106659 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.588 106659 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.588 106659 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.588 106659 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.589 106659 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.589 106659 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.589 106659 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.589 106659 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.590 106659 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.590 106659 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.590 106659 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.590 106659 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.590 106659 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.591 106659 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.591 106659 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.591 106659 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.591 106659 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.591 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.592 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.592 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.592 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.592 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.593 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.593 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.593 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.593 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.593 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.594 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.594 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.594 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.594 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.594 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.595 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.595 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.595 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.595 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.596 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.596 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.596 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.596 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.596 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.597 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.597 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.597 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.597 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.597 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.598 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.598 106659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.598 106659 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.598 106659 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.599 106659 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.599 106659 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 03:59:28 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 08:59:28.599 106659 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 03:59:30 np0005540697 systemd-logind[792]: New session 22 of user zuul.
Dec  1 03:59:30 np0005540697 systemd[1]: Started Session 22 of User zuul.
Dec  1 03:59:31 np0005540697 python3.9[106957]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 03:59:33 np0005540697 python3.9[107114]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:59:34 np0005540697 python3.9[107277]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 03:59:34 np0005540697 systemd[1]: Reloading.
Dec  1 03:59:34 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 03:59:34 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 03:59:35 np0005540697 python3.9[107461]: ansible-ansible.builtin.service_facts Invoked
Dec  1 03:59:36 np0005540697 network[107478]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 03:59:36 np0005540697 network[107479]: 'network-scripts' will be removed from distribution in near future.
Dec  1 03:59:36 np0005540697 network[107480]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 03:59:41 np0005540697 python3.9[107742]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:42 np0005540697 python3.9[107895]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:43 np0005540697 python3.9[108048]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:43 np0005540697 python3.9[108201]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:44 np0005540697 python3.9[108354]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:45 np0005540697 python3.9[108507]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:46 np0005540697 python3.9[108660]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 03:59:47 np0005540697 python3.9[108813]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:48 np0005540697 python3.9[108965]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:49 np0005540697 python3.9[109117]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:49 np0005540697 python3.9[109269]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:50 np0005540697 python3.9[109421]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:51 np0005540697 python3.9[109573]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:52 np0005540697 python3.9[109725]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:52 np0005540697 python3.9[109877]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:53 np0005540697 python3.9[110029]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:54 np0005540697 python3.9[110181]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:55 np0005540697 podman[110305]: 2025-12-01 08:59:55.170406562 +0000 UTC m=+0.075922617 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 03:59:55 np0005540697 python3.9[110347]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:55 np0005540697 python3.9[110503]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:56 np0005540697 python3.9[110655]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:57 np0005540697 podman[110780]: 2025-12-01 08:59:57.353590949 +0000 UTC m=+0.186964080 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 03:59:57 np0005540697 python3.9[110826]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 03:59:58 np0005540697 python3.9[110984]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 03:59:59 np0005540697 python3.9[111136]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 04:00:00 np0005540697 python3.9[111288]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:00:00 np0005540697 systemd[1]: Reloading.
Dec  1 04:00:00 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:00:00 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:00:01 np0005540697 python3.9[111475]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:00:01 np0005540697 python3.9[111628]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:00:02 np0005540697 python3.9[111781]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:00:03 np0005540697 python3.9[111934]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:00:04 np0005540697 python3.9[112087]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:00:04 np0005540697 python3.9[112240]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:00:05 np0005540697 python3.9[112395]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:00:07 np0005540697 python3.9[112548]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  1 04:00:08 np0005540697 python3.9[112701]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 04:00:09 np0005540697 python3.9[112859]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 04:00:10 np0005540697 python3.9[113019]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:00:11 np0005540697 python3.9[113103]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:00:25 np0005540697 podman[113184]: 2025-12-01 09:00:25.727848574 +0000 UTC m=+0.093423889 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  1 04:00:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:00:26.470 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:00:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:00:26.471 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:00:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:00:26.471 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:00:27 np0005540697 podman[113268]: 2025-12-01 09:00:27.765000866 +0000 UTC m=+0.141482564 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:00:43 np0005540697 kernel: SELinux:  Converting 2757 SID table entries...
Dec  1 04:00:43 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:00:43 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:00:43 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:00:43 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:00:43 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:00:43 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:00:43 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:00:53 np0005540697 kernel: SELinux:  Converting 2757 SID table entries...
Dec  1 04:00:53 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:00:53 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:00:53 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:00:53 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:00:53 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:00:53 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:00:53 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:00:56 np0005540697 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  1 04:00:56 np0005540697 podman[113360]: 2025-12-01 09:00:56.704084922 +0000 UTC m=+0.068200312 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 04:00:58 np0005540697 podman[113380]: 2025-12-01 09:00:58.722364982 +0000 UTC m=+0.100118398 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  1 04:01:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:01:26.471 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:01:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:01:26.472 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:01:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:01:26.472 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:01:27 np0005540697 podman[124698]: 2025-12-01 09:01:27.705739723 +0000 UTC m=+0.072409406 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:01:29 np0005540697 podman[125823]: 2025-12-01 09:01:29.767181524 +0000 UTC m=+0.128702929 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 04:01:54 np0005540697 kernel: SELinux:  Converting 2758 SID table entries...
Dec  1 04:01:54 np0005540697 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:01:54 np0005540697 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:01:54 np0005540697 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:01:54 np0005540697 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:01:54 np0005540697 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:01:54 np0005540697 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:01:54 np0005540697 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:01:55 np0005540697 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  1 04:01:55 np0005540697 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  1 04:01:55 np0005540697 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  1 04:01:58 np0005540697 podman[130333]: 2025-12-01 09:01:58.011270066 +0000 UTC m=+0.116870323 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:02:00 np0005540697 podman[130365]: 2025-12-01 09:02:00.362769877 +0000 UTC m=+0.109233732 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  1 04:02:05 np0005540697 systemd[1]: Stopping OpenSSH server daemon...
Dec  1 04:02:05 np0005540697 systemd[1]: sshd.service: Deactivated successfully.
Dec  1 04:02:05 np0005540697 systemd[1]: sshd.service: Unit process 130360 (sshd-session) remains running after unit stopped.
Dec  1 04:02:05 np0005540697 systemd[1]: Stopped OpenSSH server daemon.
Dec  1 04:02:05 np0005540697 systemd[1]: sshd.service: Consumed 2.791s CPU time, 15.6M memory peak, read 564.0K from disk, written 32.0K to disk.
Dec  1 04:02:05 np0005540697 systemd[1]: Stopped target sshd-keygen.target.
Dec  1 04:02:05 np0005540697 systemd[1]: Stopping sshd-keygen.target...
Dec  1 04:02:05 np0005540697 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 04:02:05 np0005540697 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 04:02:05 np0005540697 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 04:02:05 np0005540697 systemd[1]: Reached target sshd-keygen.target.
Dec  1 04:02:05 np0005540697 systemd[1]: Starting OpenSSH server daemon...
Dec  1 04:02:05 np0005540697 systemd[1]: Started OpenSSH server daemon.
Dec  1 04:02:08 np0005540697 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:02:08 np0005540697 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:02:08 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:08 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:08 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:08 np0005540697 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 04:02:12 np0005540697 python3.9[135587]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 04:02:12 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:12 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:12 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:13 np0005540697 python3.9[136836]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 04:02:13 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:14 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:14 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:14 np0005540697 python3.9[138010]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 04:02:14 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:15 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:15 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:16 np0005540697 python3.9[139748]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 04:02:16 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:16 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:16 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:17 np0005540697 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:02:17 np0005540697 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:02:17 np0005540697 systemd[1]: man-db-cache-update.service: Consumed 11.288s CPU time.
Dec  1 04:02:17 np0005540697 systemd[1]: run-ra2cd2e2cf488492aaea2fb2d4f1b384a.service: Deactivated successfully.
Dec  1 04:02:18 np0005540697 python3.9[140627]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:18 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:18 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:18 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:19 np0005540697 python3.9[140818]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:19 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:19 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:19 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:20 np0005540697 python3.9[141008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:21 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:21 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:21 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:22 np0005540697 python3.9[141197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:22 np0005540697 python3.9[141352]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:23 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:23 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:23 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:24 np0005540697 python3.9[141543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 04:02:24 np0005540697 systemd[1]: Reloading.
Dec  1 04:02:24 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:02:24 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:02:24 np0005540697 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  1 04:02:24 np0005540697 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  1 04:02:25 np0005540697 python3.9[141736]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:02:26.472 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:02:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:02:26.474 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:02:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:02:26.474 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:02:27 np0005540697 python3.9[141891]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:28 np0005540697 python3.9[142046]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:28 np0005540697 podman[142048]: 2025-12-01 09:02:28.339879011 +0000 UTC m=+0.069435402 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:02:29 np0005540697 python3.9[142222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:30 np0005540697 python3.9[142377]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:30 np0005540697 podman[142504]: 2025-12-01 09:02:30.76880261 +0000 UTC m=+0.122986430 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:02:31 np0005540697 python3.9[142549]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:31 np0005540697 python3.9[142714]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:32 np0005540697 python3.9[142870]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:33 np0005540697 python3.9[143026]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:34 np0005540697 python3.9[143181]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:35 np0005540697 python3.9[143336]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:36 np0005540697 python3.9[143491]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:37 np0005540697 python3.9[143646]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:38 np0005540697 python3.9[143801]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 04:02:39 np0005540697 python3.9[143956]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:02:40 np0005540697 python3.9[144108]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:02:41 np0005540697 python3.9[144260]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:02:41 np0005540697 python3.9[144412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:02:42 np0005540697 python3.9[144564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:02:43 np0005540697 python3.9[144716]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:02:44 np0005540697 python3.9[144868]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:02:45 np0005540697 python3.9[144993]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764579763.6584022-554-56594451025640/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:46 np0005540697 python3.9[145145]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:02:46 np0005540697 python3.9[145270]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764579765.5230255-554-29819369703129/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:47 np0005540697 python3.9[145423]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:02:47 np0005540697 python3.9[145548]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764579766.798231-554-27083014701345/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:48 np0005540697 python3.9[145700]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:02:49 np0005540697 python3.9[145825]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764579767.9828448-554-146164316174011/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:49 np0005540697 python3.9[145977]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:02:50 np0005540697 python3.9[146102]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764579769.380007-554-147623657633305/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:51 np0005540697 python3.9[146254]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:02:51 np0005540697 python3.9[146379]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764579770.6317837-554-63477610222748/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:52 np0005540697 python3.9[146531]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:02:52 np0005540697 python3.9[146654]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764579771.8642747-554-59553340478231/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:53 np0005540697 python3.9[146806]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:02:54 np0005540697 python3.9[146931]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764579773.142145-554-205341616471389/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:55 np0005540697 python3.9[147084]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  1 04:02:55 np0005540697 python3.9[147237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:56 np0005540697 python3.9[147389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:57 np0005540697 python3.9[147541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:57 np0005540697 python3.9[147693]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:58 np0005540697 python3.9[147845]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:58 np0005540697 podman[147893]: 2025-12-01 09:02:58.66058045 +0000 UTC m=+0.042760187 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  1 04:02:59 np0005540697 python3.9[148017]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:02:59 np0005540697 python3.9[148169]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:00 np0005540697 python3.9[148321]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:01 np0005540697 podman[148445]: 2025-12-01 09:03:01.108481188 +0000 UTC m=+0.099911231 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 04:03:01 np0005540697 python3.9[148492]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:02 np0005540697 python3.9[148651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:02 np0005540697 python3.9[148803]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:03 np0005540697 python3.9[148955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:04 np0005540697 python3.9[149107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:05 np0005540697 python3.9[149259]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:05 np0005540697 python3.9[149411]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:06 np0005540697 python3.9[149534]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579785.2936983-775-103515615127444/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:07 np0005540697 python3.9[149686]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:07 np0005540697 python3.9[149809]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579786.7898912-775-88978935626586/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:08 np0005540697 python3.9[149961]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:09 np0005540697 python3.9[150084]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579788.1433022-775-56651116849310/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:10 np0005540697 python3.9[150236]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:10 np0005540697 python3.9[150361]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579789.5480213-775-36640057954788/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:11 np0005540697 python3.9[150513]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:12 np0005540697 python3.9[150636]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579790.9696836-775-238821842700910/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:12 np0005540697 python3.9[150788]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:13 np0005540697 python3.9[150911]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579792.4155903-775-137191355907827/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:14 np0005540697 python3.9[151065]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:14 np0005540697 python3.9[151188]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579793.6289139-775-78180422900955/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:15 np0005540697 python3.9[151340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:16 np0005540697 python3.9[151463]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579794.9683669-775-198369641679202/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:17 np0005540697 python3.9[151615]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:18 np0005540697 python3.9[151739]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579796.5798016-775-206713229748533/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:19 np0005540697 python3.9[151891]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:19 np0005540697 python3.9[152014]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579798.8683248-775-247526631911907/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:20 np0005540697 python3.9[152166]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:21 np0005540697 python3.9[152289]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579800.162944-775-82130591237748/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:22 np0005540697 python3.9[152441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:22 np0005540697 python3.9[152564]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579801.5640242-775-78597102301935/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:23 np0005540697 python3.9[152716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:24 np0005540697 python3.9[152839]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579802.9534588-775-108447101781693/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:24 np0005540697 python3.9[152991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:25 np0005540697 python3.9[153114]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579804.4178405-775-97264043788728/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:26 np0005540697 python3.9[153264]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:03:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:03:26.474 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:03:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:03:26.475 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:03:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:03:26.476 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:03:27 np0005540697 python3.9[153419]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  1 04:03:29 np0005540697 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  1 04:03:29 np0005540697 podman[153547]: 2025-12-01 09:03:29.259312983 +0000 UTC m=+0.068893477 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 04:03:29 np0005540697 python3.9[153594]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:30 np0005540697 python3.9[153746]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:30 np0005540697 python3.9[153898]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:31 np0005540697 podman[154050]: 2025-12-01 09:03:31.264658067 +0000 UTC m=+0.092003667 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 04:03:31 np0005540697 python3.9[154051]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:32 np0005540697 python3.9[154226]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:32 np0005540697 python3.9[154378]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:33 np0005540697 python3.9[154530]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:34 np0005540697 python3.9[154682]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:34 np0005540697 python3.9[154834]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:35 np0005540697 python3.9[154986]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:36 np0005540697 python3.9[155138]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:03:36 np0005540697 systemd[1]: Reloading.
Dec  1 04:03:36 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:03:36 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:03:36 np0005540697 systemd[1]: Starting libvirt logging daemon socket...
Dec  1 04:03:36 np0005540697 systemd[1]: Listening on libvirt logging daemon socket.
Dec  1 04:03:36 np0005540697 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  1 04:03:36 np0005540697 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  1 04:03:36 np0005540697 systemd[1]: Starting libvirt logging daemon...
Dec  1 04:03:36 np0005540697 systemd[1]: Started libvirt logging daemon.
Dec  1 04:03:37 np0005540697 python3.9[155332]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:03:37 np0005540697 systemd[1]: Reloading.
Dec  1 04:03:37 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:03:37 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:03:38 np0005540697 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  1 04:03:38 np0005540697 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  1 04:03:38 np0005540697 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  1 04:03:38 np0005540697 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  1 04:03:38 np0005540697 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  1 04:03:38 np0005540697 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  1 04:03:38 np0005540697 systemd[1]: Starting libvirt nodedev daemon...
Dec  1 04:03:38 np0005540697 systemd[1]: Started libvirt nodedev daemon.
Dec  1 04:03:38 np0005540697 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  1 04:03:38 np0005540697 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  1 04:03:38 np0005540697 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  1 04:03:38 np0005540697 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  1 04:03:38 np0005540697 python3.9[155557]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:03:38 np0005540697 systemd[1]: Reloading.
Dec  1 04:03:39 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:03:39 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:03:39 np0005540697 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  1 04:03:39 np0005540697 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  1 04:03:39 np0005540697 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  1 04:03:39 np0005540697 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  1 04:03:39 np0005540697 systemd[1]: Starting libvirt proxy daemon...
Dec  1 04:03:39 np0005540697 systemd[1]: Started libvirt proxy daemon.
Dec  1 04:03:39 np0005540697 setroubleshoot[155396]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 8613b59e-9e48-4193-8c9b-5e3dcc427fcd
Dec  1 04:03:39 np0005540697 setroubleshoot[155396]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  1 04:03:39 np0005540697 setroubleshoot[155396]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 8613b59e-9e48-4193-8c9b-5e3dcc427fcd
Dec  1 04:03:39 np0005540697 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:03:39 np0005540697 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:03:39 np0005540697 setroubleshoot[155396]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  1 04:03:40 np0005540697 python3.9[155772]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:03:40 np0005540697 systemd[1]: Reloading.
Dec  1 04:03:40 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:03:40 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:03:40 np0005540697 systemd[1]: Listening on libvirt locking daemon socket.
Dec  1 04:03:40 np0005540697 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  1 04:03:40 np0005540697 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  1 04:03:40 np0005540697 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  1 04:03:40 np0005540697 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  1 04:03:40 np0005540697 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  1 04:03:40 np0005540697 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  1 04:03:40 np0005540697 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  1 04:03:40 np0005540697 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  1 04:03:40 np0005540697 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  1 04:03:40 np0005540697 systemd[1]: Starting libvirt QEMU daemon...
Dec  1 04:03:40 np0005540697 systemd[1]: Started libvirt QEMU daemon.
Dec  1 04:03:41 np0005540697 python3.9[155987]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:03:41 np0005540697 systemd[1]: Reloading.
Dec  1 04:03:41 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:03:41 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:03:41 np0005540697 systemd[1]: Starting libvirt secret daemon socket...
Dec  1 04:03:41 np0005540697 systemd[1]: Listening on libvirt secret daemon socket.
Dec  1 04:03:41 np0005540697 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  1 04:03:41 np0005540697 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  1 04:03:41 np0005540697 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  1 04:03:41 np0005540697 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  1 04:03:41 np0005540697 systemd[1]: Starting libvirt secret daemon...
Dec  1 04:03:41 np0005540697 systemd[1]: Started libvirt secret daemon.
Dec  1 04:03:42 np0005540697 python3.9[156198]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:43 np0005540697 python3.9[156350]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 04:03:44 np0005540697 python3.9[156502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:44 np0005540697 python3.9[156625]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579823.7261465-1120-222659893615900/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:45 np0005540697 python3.9[156777]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:46 np0005540697 python3.9[156929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:46 np0005540697 python3.9[157007]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:47 np0005540697 python3.9[157159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:47 np0005540697 python3.9[157238]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gd0e2k5t recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:48 np0005540697 python3.9[157390]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:49 np0005540697 python3.9[157468]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:49 np0005540697 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  1 04:03:49 np0005540697 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  1 04:03:50 np0005540697 python3.9[157620]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:03:51 np0005540697 python3[157773]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 04:03:51 np0005540697 python3.9[157925]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:52 np0005540697 python3.9[158003]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:53 np0005540697 python3.9[158155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:53 np0005540697 python3.9[158233]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:54 np0005540697 python3.9[158385]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:55 np0005540697 python3.9[158463]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:56 np0005540697 python3.9[158615]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:56 np0005540697 python3.9[158693]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:57 np0005540697 python3.9[158845]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:03:58 np0005540697 python3.9[158970]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764579837.069971-1245-217894602606085/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:59 np0005540697 python3.9[159122]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:03:59 np0005540697 podman[159222]: 2025-12-01 09:03:59.697883809 +0000 UTC m=+0.061531740 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 04:03:59 np0005540697 python3.9[159296]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:04:00 np0005540697 python3.9[159451]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:01 np0005540697 podman[159551]: 2025-12-01 09:04:01.728024549 +0000 UTC m=+0.102648595 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 04:04:01 np0005540697 python3.9[159629]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:04:02 np0005540697 python3.9[159782]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:04:03 np0005540697 python3.9[159936]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:04:04 np0005540697 python3.9[160091]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:05 np0005540697 python3.9[160243]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:04:05 np0005540697 python3.9[160366]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579844.4715896-1317-145950921347840/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:06 np0005540697 python3.9[160518]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:04:06 np0005540697 python3.9[160641]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579845.8025296-1332-68537834579388/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:07 np0005540697 python3.9[160793]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:04:08 np0005540697 python3.9[160916]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579847.196331-1347-195988947873130/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:09 np0005540697 python3.9[161068]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:04:09 np0005540697 systemd[1]: Reloading.
Dec  1 04:04:09 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:04:09 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:04:09 np0005540697 systemd[1]: Reached target edpm_libvirt.target.
Dec  1 04:04:10 np0005540697 python3.9[161262]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 04:04:10 np0005540697 systemd[1]: Reloading.
Dec  1 04:04:10 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:04:10 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:04:11 np0005540697 systemd[1]: Reloading.
Dec  1 04:04:11 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:04:11 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:04:11 np0005540697 systemd[1]: session-22.scope: Deactivated successfully.
Dec  1 04:04:11 np0005540697 systemd[1]: session-22.scope: Consumed 3min 38.947s CPU time.
Dec  1 04:04:11 np0005540697 systemd-logind[792]: Session 22 logged out. Waiting for processes to exit.
Dec  1 04:04:11 np0005540697 systemd-logind[792]: Removed session 22.
Dec  1 04:04:18 np0005540697 systemd-logind[792]: New session 23 of user zuul.
Dec  1 04:04:18 np0005540697 systemd[1]: Started Session 23 of User zuul.
Dec  1 04:04:19 np0005540697 python3.9[161515]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:04:20 np0005540697 python3.9[161669]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:04:20 np0005540697 network[161686]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:04:20 np0005540697 network[161687]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:04:20 np0005540697 network[161688]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:04:25 np0005540697 python3.9[161960]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:04:26 np0005540697 python3.9[162044]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:04:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:04:26.474 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:04:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:04:26.475 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:04:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:04:26.475 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:04:30 np0005540697 podman[162046]: 2025-12-01 09:04:30.725993015 +0000 UTC m=+0.074098404 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 04:04:32 np0005540697 podman[162191]: 2025-12-01 09:04:32.284933234 +0000 UTC m=+0.100653341 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 04:04:32 np0005540697 python3.9[162242]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:04:33 np0005540697 python3.9[162397]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:04:34 np0005540697 python3.9[162550]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:04:34 np0005540697 python3.9[162702]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:04:35 np0005540697 python3.9[162855]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:04:36 np0005540697 python3.9[162978]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579875.107363-95-244140941243527/.source.iscsi _original_basename=.bh2h0g6i follow=False checksum=e67c41388ad853dfd0c43d8ec15e231ebddfb5e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:37 np0005540697 python3.9[163130]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:38 np0005540697 python3.9[163282]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:39 np0005540697 python3.9[163434]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:04:39 np0005540697 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  1 04:04:40 np0005540697 python3.9[163590]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:04:40 np0005540697 systemd[1]: Reloading.
Dec  1 04:04:40 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:04:40 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:04:40 np0005540697 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  1 04:04:40 np0005540697 systemd[1]: Starting Open-iSCSI...
Dec  1 04:04:40 np0005540697 kernel: Loading iSCSI transport class v2.0-870.
Dec  1 04:04:40 np0005540697 systemd[1]: Started Open-iSCSI.
Dec  1 04:04:40 np0005540697 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  1 04:04:40 np0005540697 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  1 04:04:42 np0005540697 python3.9[163793]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:04:42 np0005540697 network[163810]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:04:42 np0005540697 network[163811]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:04:42 np0005540697 network[163812]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:04:47 np0005540697 python3.9[164083]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 04:04:48 np0005540697 python3.9[164235]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  1 04:04:49 np0005540697 python3.9[164391]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:04:49 np0005540697 python3.9[164514]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579888.5697095-172-203276383958557/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:50 np0005540697 python3.9[164666]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:51 np0005540697 python3.9[164818]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:04:51 np0005540697 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  1 04:04:51 np0005540697 systemd[1]: Stopped Load Kernel Modules.
Dec  1 04:04:51 np0005540697 systemd[1]: Stopping Load Kernel Modules...
Dec  1 04:04:51 np0005540697 systemd[1]: Starting Load Kernel Modules...
Dec  1 04:04:51 np0005540697 systemd[1]: Finished Load Kernel Modules.
Dec  1 04:04:52 np0005540697 python3.9[164974]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:04:53 np0005540697 python3.9[165126]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:04:54 np0005540697 python3.9[165278]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:04:55 np0005540697 python3.9[165430]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:04:55 np0005540697 python3.9[165553]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579894.5509658-230-267161499735725/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:56 np0005540697 python3.9[165705]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:04:57 np0005540697 python3.9[165858]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:57 np0005540697 python3.9[166010]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:58 np0005540697 python3.9[166162]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:59 np0005540697 python3.9[166314]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:04:59 np0005540697 python3.9[166466]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:00 np0005540697 python3.9[166618]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:00 np0005540697 podman[166742]: 2025-12-01 09:05:00.840890268 +0000 UTC m=+0.054059871 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:05:01 np0005540697 python3.9[166790]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:01 np0005540697 python3.9[166942]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:05:02 np0005540697 python3.9[167096]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:02 np0005540697 podman[167165]: 2025-12-01 09:05:02.697112919 +0000 UTC m=+0.069692817 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true)
Dec  1 04:05:03 np0005540697 python3.9[167272]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:05:03 np0005540697 python3.9[167424]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:04 np0005540697 python3.9[167502]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:05:04 np0005540697 python3.9[167654]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:05 np0005540697 python3.9[167732]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:05:05 np0005540697 python3.9[167884]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:06 np0005540697 python3.9[168036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:06 np0005540697 python3.9[168114]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:07 np0005540697 python3.9[168266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:08 np0005540697 python3.9[168344]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:08 np0005540697 python3.9[168496]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:08 np0005540697 systemd[1]: Reloading.
Dec  1 04:05:09 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:05:09 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:05:10 np0005540697 python3.9[168685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:10 np0005540697 python3.9[168765]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:11 np0005540697 python3.9[168917]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:11 np0005540697 python3.9[168995]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:12 np0005540697 python3.9[169147]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:12 np0005540697 systemd[1]: Reloading.
Dec  1 04:05:12 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:05:12 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:05:13 np0005540697 systemd[1]: Starting Create netns directory...
Dec  1 04:05:13 np0005540697 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 04:05:13 np0005540697 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 04:05:13 np0005540697 systemd[1]: Finished Create netns directory.
Dec  1 04:05:13 np0005540697 python3.9[169342]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:05:14 np0005540697 python3.9[169494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:15 np0005540697 python3.9[169617]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579914.1352289-437-125199700077409/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:05:16 np0005540697 python3.9[169769]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:05:17 np0005540697 python3.9[169921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:17 np0005540697 python3.9[170044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579916.6251042-462-184202825461040/.source.json _original_basename=.8ssj_o3b follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:18 np0005540697 python3.9[170196]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:20 np0005540697 python3.9[170625]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  1 04:05:21 np0005540697 python3.9[170777]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:05:22 np0005540697 python3.9[170929]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 04:05:23 np0005540697 python3[171107]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:05:24 np0005540697 podman[171143]: 2025-12-01 09:05:23.933147228 +0000 UTC m=+0.028743480 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 04:05:24 np0005540697 podman[171143]: 2025-12-01 09:05:24.968785626 +0000 UTC m=+1.064381848 container create 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:05:24 np0005540697 python3[171107]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 04:05:25 np0005540697 python3.9[171332]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:05:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:05:26.475 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:05:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:05:26.477 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:05:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:05:26.477 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:05:26 np0005540697 python3.9[171486]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:26 np0005540697 python3.9[171562]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:05:27 np0005540697 python3.9[171713]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764579927.0831664-550-216617785173316/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:28 np0005540697 python3.9[171789]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:05:28 np0005540697 systemd[1]: Reloading.
Dec  1 04:05:28 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:05:28 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:05:29 np0005540697 python3.9[171899]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:29 np0005540697 systemd[1]: Reloading.
Dec  1 04:05:29 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:05:29 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:05:30 np0005540697 systemd[1]: Starting multipathd container...
Dec  1 04:05:30 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:05:30 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde9441456a328364bee503d59409651e403471a4ad878479ccbac347b3d2885/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 04:05:30 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde9441456a328364bee503d59409651e403471a4ad878479ccbac347b3d2885/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 04:05:30 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.
Dec  1 04:05:30 np0005540697 podman[171939]: 2025-12-01 09:05:30.735178409 +0000 UTC m=+0.353021425 container init 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 04:05:30 np0005540697 multipathd[171954]: + sudo -E kolla_set_configs
Dec  1 04:05:30 np0005540697 podman[171939]: 2025-12-01 09:05:30.758336259 +0000 UTC m=+0.376179255 container start 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:05:30 np0005540697 multipathd[171954]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:05:30 np0005540697 multipathd[171954]: INFO:__main__:Validating config file
Dec  1 04:05:30 np0005540697 multipathd[171954]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:05:30 np0005540697 multipathd[171954]: INFO:__main__:Writing out command to execute
Dec  1 04:05:30 np0005540697 multipathd[171954]: ++ cat /run_command
Dec  1 04:05:30 np0005540697 multipathd[171954]: + CMD='/usr/sbin/multipathd -d'
Dec  1 04:05:30 np0005540697 multipathd[171954]: + ARGS=
Dec  1 04:05:30 np0005540697 multipathd[171954]: + sudo kolla_copy_cacerts
Dec  1 04:05:30 np0005540697 multipathd[171954]: + [[ ! -n '' ]]
Dec  1 04:05:30 np0005540697 multipathd[171954]: + . kolla_extend_start
Dec  1 04:05:30 np0005540697 multipathd[171954]: Running command: '/usr/sbin/multipathd -d'
Dec  1 04:05:30 np0005540697 multipathd[171954]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  1 04:05:30 np0005540697 multipathd[171954]: + umask 0022
Dec  1 04:05:30 np0005540697 multipathd[171954]: + exec /usr/sbin/multipathd -d
Dec  1 04:05:30 np0005540697 multipathd[171954]: 3170.605323 | --------start up--------
Dec  1 04:05:30 np0005540697 multipathd[171954]: 3170.605346 | read /etc/multipath.conf
Dec  1 04:05:30 np0005540697 multipathd[171954]: 3170.612153 | path checkers start up
Dec  1 04:05:31 np0005540697 podman[171939]: multipathd
Dec  1 04:05:31 np0005540697 systemd[1]: Started multipathd container.
Dec  1 04:05:31 np0005540697 podman[171985]: 2025-12-01 09:05:31.313863461 +0000 UTC m=+0.055973405 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  1 04:05:31 np0005540697 podman[171961]: 2025-12-01 09:05:31.365137395 +0000 UTC m=+0.596343844 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 04:05:31 np0005540697 python3.9[172162]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:05:32 np0005540697 python3.9[172316]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:05:33 np0005540697 podman[172453]: 2025-12-01 09:05:33.096719547 +0000 UTC m=+0.079658217 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 04:05:33 np0005540697 python3.9[172501]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:05:33 np0005540697 systemd[1]: Stopping multipathd container...
Dec  1 04:05:33 np0005540697 multipathd[171954]: 3173.246339 | exit (signal)
Dec  1 04:05:33 np0005540697 multipathd[171954]: 3173.246463 | --------shut down-------
Dec  1 04:05:33 np0005540697 systemd[1]: libpod-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope: Deactivated successfully.
Dec  1 04:05:33 np0005540697 podman[172511]: 2025-12-01 09:05:33.530037783 +0000 UTC m=+0.075409368 container died 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 04:05:33 np0005540697 systemd[1]: 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed-7cd0973274634a9f.timer: Deactivated successfully.
Dec  1 04:05:33 np0005540697 systemd[1]: Stopped /usr/bin/podman healthcheck run 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.
Dec  1 04:05:33 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed-userdata-shm.mount: Deactivated successfully.
Dec  1 04:05:33 np0005540697 systemd[1]: var-lib-containers-storage-overlay-dde9441456a328364bee503d59409651e403471a4ad878479ccbac347b3d2885-merged.mount: Deactivated successfully.
Dec  1 04:05:33 np0005540697 podman[172511]: 2025-12-01 09:05:33.590014819 +0000 UTC m=+0.135386424 container cleanup 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 04:05:33 np0005540697 podman[172511]: multipathd
Dec  1 04:05:33 np0005540697 podman[172540]: multipathd
Dec  1 04:05:33 np0005540697 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  1 04:05:33 np0005540697 systemd[1]: Stopped multipathd container.
Dec  1 04:05:33 np0005540697 systemd[1]: Starting multipathd container...
Dec  1 04:05:33 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:05:33 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde9441456a328364bee503d59409651e403471a4ad878479ccbac347b3d2885/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 04:05:33 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde9441456a328364bee503d59409651e403471a4ad878479ccbac347b3d2885/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 04:05:33 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.
Dec  1 04:05:33 np0005540697 podman[172553]: 2025-12-01 09:05:33.789811655 +0000 UTC m=+0.104033965 container init 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 04:05:33 np0005540697 multipathd[172568]: + sudo -E kolla_set_configs
Dec  1 04:05:33 np0005540697 podman[172553]: 2025-12-01 09:05:33.823305145 +0000 UTC m=+0.137527445 container start 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 04:05:33 np0005540697 podman[172553]: multipathd
Dec  1 04:05:33 np0005540697 systemd[1]: Started multipathd container.
Dec  1 04:05:33 np0005540697 multipathd[172568]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:05:33 np0005540697 multipathd[172568]: INFO:__main__:Validating config file
Dec  1 04:05:33 np0005540697 multipathd[172568]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:05:33 np0005540697 multipathd[172568]: INFO:__main__:Writing out command to execute
Dec  1 04:05:33 np0005540697 multipathd[172568]: ++ cat /run_command
Dec  1 04:05:33 np0005540697 multipathd[172568]: + CMD='/usr/sbin/multipathd -d'
Dec  1 04:05:33 np0005540697 multipathd[172568]: + ARGS=
Dec  1 04:05:33 np0005540697 multipathd[172568]: + sudo kolla_copy_cacerts
Dec  1 04:05:33 np0005540697 multipathd[172568]: + [[ ! -n '' ]]
Dec  1 04:05:33 np0005540697 multipathd[172568]: + . kolla_extend_start
Dec  1 04:05:33 np0005540697 multipathd[172568]: Running command: '/usr/sbin/multipathd -d'
Dec  1 04:05:33 np0005540697 multipathd[172568]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  1 04:05:33 np0005540697 multipathd[172568]: + umask 0022
Dec  1 04:05:33 np0005540697 multipathd[172568]: + exec /usr/sbin/multipathd -d
Dec  1 04:05:33 np0005540697 podman[172575]: 2025-12-01 09:05:33.898806304 +0000 UTC m=+0.064597726 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  1 04:05:33 np0005540697 systemd[1]: 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed-2e91b233e3336853.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:05:33 np0005540697 systemd[1]: 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed-2e91b233e3336853.service: Failed with result 'exit-code'.
Dec  1 04:05:33 np0005540697 multipathd[172568]: 3173.657898 | --------start up--------
Dec  1 04:05:33 np0005540697 multipathd[172568]: 3173.657911 | read /etc/multipath.conf
Dec  1 04:05:33 np0005540697 multipathd[172568]: 3173.662837 | path checkers start up
Dec  1 04:05:34 np0005540697 python3.9[172759]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:35 np0005540697 python3.9[172911]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 04:05:36 np0005540697 python3.9[173063]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  1 04:05:36 np0005540697 kernel: Key type psk registered
Dec  1 04:05:36 np0005540697 python3.9[173226]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:05:37 np0005540697 python3.9[173349]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764579936.3477345-630-106249912228214/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:38 np0005540697 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  1 04:05:38 np0005540697 python3.9[173501]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:38 np0005540697 python3.9[173654]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:05:39 np0005540697 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  1 04:05:39 np0005540697 systemd[1]: Stopped Load Kernel Modules.
Dec  1 04:05:39 np0005540697 systemd[1]: Stopping Load Kernel Modules...
Dec  1 04:05:39 np0005540697 systemd[1]: Starting Load Kernel Modules...
Dec  1 04:05:39 np0005540697 systemd[1]: Finished Load Kernel Modules.
Dec  1 04:05:39 np0005540697 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 04:05:39 np0005540697 python3.9[173811]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:05:40 np0005540697 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  1 04:05:41 np0005540697 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  1 04:05:42 np0005540697 systemd[1]: Reloading.
Dec  1 04:05:42 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:05:42 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:05:42 np0005540697 systemd[1]: Reloading.
Dec  1 04:05:42 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:05:42 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:05:43 np0005540697 systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  1 04:05:43 np0005540697 systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  1 04:05:43 np0005540697 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:05:43 np0005540697 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:05:43 np0005540697 systemd[1]: Reloading.
Dec  1 04:05:43 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:05:43 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:05:43 np0005540697 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 04:05:45 np0005540697 python3.9[175206]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:05:45 np0005540697 iscsid[163631]: iscsid shutting down.
Dec  1 04:05:45 np0005540697 systemd[1]: Stopping Open-iSCSI...
Dec  1 04:05:45 np0005540697 systemd[1]: iscsid.service: Deactivated successfully.
Dec  1 04:05:45 np0005540697 systemd[1]: Stopped Open-iSCSI.
Dec  1 04:05:45 np0005540697 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  1 04:05:45 np0005540697 systemd[1]: Starting Open-iSCSI...
Dec  1 04:05:45 np0005540697 systemd[1]: Started Open-iSCSI.
Dec  1 04:05:45 np0005540697 python3.9[175420]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:05:46 np0005540697 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:05:46 np0005540697 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:05:46 np0005540697 systemd[1]: man-db-cache-update.service: Consumed 1.894s CPU time.
Dec  1 04:05:46 np0005540697 systemd[1]: run-r7cdfd2605c5b4855beec088cd1950829.service: Deactivated successfully.
Dec  1 04:05:46 np0005540697 python3.9[175576]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:05:48 np0005540697 python3.9[175729]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:05:48 np0005540697 systemd[1]: Reloading.
Dec  1 04:05:48 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:05:48 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:05:49 np0005540697 python3.9[175914]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:05:49 np0005540697 network[175931]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:05:49 np0005540697 network[175932]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:05:49 np0005540697 network[175933]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:05:53 np0005540697 python3.9[176207]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:54 np0005540697 python3.9[176361]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:55 np0005540697 python3.9[176514]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:56 np0005540697 python3.9[176667]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:56 np0005540697 python3.9[176821]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:57 np0005540697 python3.9[176974]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:58 np0005540697 python3.9[177127]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:05:59 np0005540697 python3.9[177280]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:06:00 np0005540697 python3.9[177433]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:01 np0005540697 python3.9[177585]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:01 np0005540697 podman[177691]: 2025-12-01 09:06:01.691070145 +0000 UTC m=+0.061191216 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  1 04:06:02 np0005540697 python3.9[177758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:02 np0005540697 python3.9[177910]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:03 np0005540697 python3.9[178062]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:03 np0005540697 podman[178184]: 2025-12-01 09:06:03.714085306 +0000 UTC m=+0.085391441 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:06:03 np0005540697 python3.9[178239]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:04 np0005540697 podman[178364]: 2025-12-01 09:06:04.347489093 +0000 UTC m=+0.060737987 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 04:06:04 np0005540697 python3.9[178410]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:05 np0005540697 python3.9[178562]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:06 np0005540697 python3.9[178714]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:06 np0005540697 python3.9[178866]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:07 np0005540697 python3.9[179018]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:08 np0005540697 python3.9[179170]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:08 np0005540697 python3.9[179322]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:09 np0005540697 python3.9[179474]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:10 np0005540697 python3.9[179626]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:10 np0005540697 python3.9[179778]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:11 np0005540697 python3.9[179930]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:12 np0005540697 python3.9[180082]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 04:06:13 np0005540697 python3.9[180234]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:06:13 np0005540697 systemd[1]: Reloading.
Dec  1 04:06:13 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:06:13 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:06:14 np0005540697 python3.9[180420]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:15 np0005540697 python3.9[180573]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:16 np0005540697 python3.9[180728]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:17 np0005540697 python3.9[180881]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:17 np0005540697 python3.9[181034]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:18 np0005540697 python3.9[181187]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:18 np0005540697 python3.9[181340]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:19 np0005540697 python3.9[181493]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:06:21 np0005540697 python3.9[181648]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:21 np0005540697 python3.9[181800]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:22 np0005540697 python3.9[181952]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:23 np0005540697 python3.9[182104]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:24 np0005540697 python3.9[182256]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:24 np0005540697 python3.9[182408]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:25 np0005540697 python3.9[182560]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:26 np0005540697 python3.9[182712]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:06:26.476 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:06:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:06:26.477 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:06:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:06:26.477 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:06:26 np0005540697 python3.9[182864]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:27 np0005540697 python3.9[183016]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:32 np0005540697 podman[183141]: 2025-12-01 09:06:32.218383508 +0000 UTC m=+0.065674201 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:06:32 np0005540697 python3.9[183185]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  1 04:06:33 np0005540697 python3.9[183341]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 04:06:34 np0005540697 podman[183472]: 2025-12-01 09:06:34.291182504 +0000 UTC m=+0.105932879 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:06:34 np0005540697 python3.9[183522]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 04:06:34 np0005540697 podman[183529]: 2025-12-01 09:06:34.594882769 +0000 UTC m=+0.067384120 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Dec  1 04:06:36 np0005540697 systemd-logind[792]: New session 24 of user zuul.
Dec  1 04:06:36 np0005540697 systemd[1]: Started Session 24 of User zuul.
Dec  1 04:06:36 np0005540697 systemd[1]: session-24.scope: Deactivated successfully.
Dec  1 04:06:36 np0005540697 systemd-logind[792]: Session 24 logged out. Waiting for processes to exit.
Dec  1 04:06:36 np0005540697 systemd-logind[792]: Removed session 24.
Dec  1 04:06:37 np0005540697 python3.9[183734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:37 np0005540697 python3.9[183855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579996.6568308-1229-235681307578582/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:38 np0005540697 python3.9[184005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:38 np0005540697 python3.9[184081]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:39 np0005540697 python3.9[184231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:39 np0005540697 python3.9[184352]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764579998.8359764-1229-255107319727852/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:40 np0005540697 python3.9[184502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:41 np0005540697 python3.9[184623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580000.0049946-1229-142302726843591/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:41 np0005540697 python3.9[184773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:42 np0005540697 python3.9[184894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580001.2682123-1229-50376114482237/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:43 np0005540697 python3.9[185044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:43 np0005540697 python3.9[185165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580002.5746825-1229-245237569759915/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:44 np0005540697 python3.9[185317]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:45 np0005540697 python3.9[185469]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:06:45 np0005540697 python3.9[185621]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:06:46 np0005540697 python3.9[185773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:47 np0005540697 python3.9[185896]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764580006.1220381-1336-237951034036229/.source _original_basename=.8ecj44nj follow=False checksum=3bb4a400763421bdd95d0efaa9104d19cb3e2e08 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  1 04:06:47 np0005540697 python3.9[186048]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:06:48 np0005540697 python3.9[186200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:49 np0005540697 python3.9[186321]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580008.1166687-1362-196300676075788/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:50 np0005540697 python3.9[186472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:06:50 np0005540697 python3.9[186593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580009.594283-1377-271914600879896/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:06:51 np0005540697 python3.9[186745]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  1 04:06:52 np0005540697 python3.9[186897]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:06:53 np0005540697 python3[187049]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:06:53 np0005540697 podman[187085]: 2025-12-01 09:06:53.522197047 +0000 UTC m=+0.068924226 container create aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3)
Dec  1 04:06:53 np0005540697 podman[187085]: 2025-12-01 09:06:53.490146651 +0000 UTC m=+0.036873830 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 04:06:53 np0005540697 python3[187049]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  1 04:06:54 np0005540697 python3.9[187275]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:06:55 np0005540697 python3.9[187429]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  1 04:06:56 np0005540697 python3.9[187581]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:06:57 np0005540697 python3[187733]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:06:57 np0005540697 podman[187770]: 2025-12-01 09:06:57.615252487 +0000 UTC m=+0.067073563 container create f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  1 04:06:57 np0005540697 podman[187770]: 2025-12-01 09:06:57.585658277 +0000 UTC m=+0.037479333 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 04:06:57 np0005540697 python3[187733]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  1 04:06:58 np0005540697 python3.9[187960]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:06:59 np0005540697 python3.9[188114]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:00 np0005540697 python3.9[188265]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580019.5999286-1469-267079160128135/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:00 np0005540697 python3.9[188341]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:07:00 np0005540697 systemd[1]: Reloading.
Dec  1 04:07:01 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:07:01 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:07:01 np0005540697 python3.9[188453]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:07:01 np0005540697 systemd[1]: Reloading.
Dec  1 04:07:01 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:07:01 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:07:02 np0005540697 systemd[1]: Starting nova_compute container...
Dec  1 04:07:02 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:07:02 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:02 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:02 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:02 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:02 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:02 np0005540697 podman[188494]: 2025-12-01 09:07:02.255807581 +0000 UTC m=+0.107067955 container init f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Dec  1 04:07:02 np0005540697 podman[188494]: 2025-12-01 09:07:02.270157845 +0000 UTC m=+0.121418189 container start f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  1 04:07:02 np0005540697 podman[188494]: nova_compute
Dec  1 04:07:02 np0005540697 systemd[1]: Started nova_compute container.
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + sudo -E kolla_set_configs
Dec  1 04:07:02 np0005540697 podman[188512]: 2025-12-01 09:07:02.342852509 +0000 UTC m=+0.089503556 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Validating config file
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying service configuration files
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Deleting /etc/ceph
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Creating directory /etc/ceph
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /etc/ceph
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Writing out command to execute
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 04:07:02 np0005540697 nova_compute[188509]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 04:07:02 np0005540697 nova_compute[188509]: ++ cat /run_command
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + CMD=nova-compute
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + ARGS=
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + sudo kolla_copy_cacerts
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + [[ ! -n '' ]]
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + . kolla_extend_start
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + echo 'Running command: '\''nova-compute'\'''
Dec  1 04:07:02 np0005540697 nova_compute[188509]: Running command: 'nova-compute'
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + umask 0022
Dec  1 04:07:02 np0005540697 nova_compute[188509]: + exec nova-compute
Dec  1 04:07:03 np0005540697 python3.9[188689]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:07:04 np0005540697 python3.9[188840]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:07:04 np0005540697 nova_compute[188509]: 2025-12-01 09:07:04.478 188522 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 04:07:04 np0005540697 nova_compute[188509]: 2025-12-01 09:07:04.478 188522 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 04:07:04 np0005540697 nova_compute[188509]: 2025-12-01 09:07:04.479 188522 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 04:07:04 np0005540697 nova_compute[188509]: 2025-12-01 09:07:04.479 188522 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  1 04:07:04 np0005540697 podman[188966]: 2025-12-01 09:07:04.608760884 +0000 UTC m=+0.101105766 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:07:04 np0005540697 nova_compute[188509]: 2025-12-01 09:07:04.621 188522 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 04:07:04 np0005540697 nova_compute[188509]: 2025-12-01 09:07:04.643 188522 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 04:07:04 np0005540697 nova_compute[188509]: 2025-12-01 09:07:04.644 188522 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  1 04:07:04 np0005540697 podman[189017]: 2025-12-01 09:07:04.714392595 +0000 UTC m=+0.066292736 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 04:07:04 np0005540697 python3.9[189007]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.249 188522 INFO nova.virt.driver [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.362 188522 INFO nova.compute.provider_config [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.378 188522 DEBUG oslo_concurrency.lockutils [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.378 188522 DEBUG oslo_concurrency.lockutils [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.379 188522 DEBUG oslo_concurrency.lockutils [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.379 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.379 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.379 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.379 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.380 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.380 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.380 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.380 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.380 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.380 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.380 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.381 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.381 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.381 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.381 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.381 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.381 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.381 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.382 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.382 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.382 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.382 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.382 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.382 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.383 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.383 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.383 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.383 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.383 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.383 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.383 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.384 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.384 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.384 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.384 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.384 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.384 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.385 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.385 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.385 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.385 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.385 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.386 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.386 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.386 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.386 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.386 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.386 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.387 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.387 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.387 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.387 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.387 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.388 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.388 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.388 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.388 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.388 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.388 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.389 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.389 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.389 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.389 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.389 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.390 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.390 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.390 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.390 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.390 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.390 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.391 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.391 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.391 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.391 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.391 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.391 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.391 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.392 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.392 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.392 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.392 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.392 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.392 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.392 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.393 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.393 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.393 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.393 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.393 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.393 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.393 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.394 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.394 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.394 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.394 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.394 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.394 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.394 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.395 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.395 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.395 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.395 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.395 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.395 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.395 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.396 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.396 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.396 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.396 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.396 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.396 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.396 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.397 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.397 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.397 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.397 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.397 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.397 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.397 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.398 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.398 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.398 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.398 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.398 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.398 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.398 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.399 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.399 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.399 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.399 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.399 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.399 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.400 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.400 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.400 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.400 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.400 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.400 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.401 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.401 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.401 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.401 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.401 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.401 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.401 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.402 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.402 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.402 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.402 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.402 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.402 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.402 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.403 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.403 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.403 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.403 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.403 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.403 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.404 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.404 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.404 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.404 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.404 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.404 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.404 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.404 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.405 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.405 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.405 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.405 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.405 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.405 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.406 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.406 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.406 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.406 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.406 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.406 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.406 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.407 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.407 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.407 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.407 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.407 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.407 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.408 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.408 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.408 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.408 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.408 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.408 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.408 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.409 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.409 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.409 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.409 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.409 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.409 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.409 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.410 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.410 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.410 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.410 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.410 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.410 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.411 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.411 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.411 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.411 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.411 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.411 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.411 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.411 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.412 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.412 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.412 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.412 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.412 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.412 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.412 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.413 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.413 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.413 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.413 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.413 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.413 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.414 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.414 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.414 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.414 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.414 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.414 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.414 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.415 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.415 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.415 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.415 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.415 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.415 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.416 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.416 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.416 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.416 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.416 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.416 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.416 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.417 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.417 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.417 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.417 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.417 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.417 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.417 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.418 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.418 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.418 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.418 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.418 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.418 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.419 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.419 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.419 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.419 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.419 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.419 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.419 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.420 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.420 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.420 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.420 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.420 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.420 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.421 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.421 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.421 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.421 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.421 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.421 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.421 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.422 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.422 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.422 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.422 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.422 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.423 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.423 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.423 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.423 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.423 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.423 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.424 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.424 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.424 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.424 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.424 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.424 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.424 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.425 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.425 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.425 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.425 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.425 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.425 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.426 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.426 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.426 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.426 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.426 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.426 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.426 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.427 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.427 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.427 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.427 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.427 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.427 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.427 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.428 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.428 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.428 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.428 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.428 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.428 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.428 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.429 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.429 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.429 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.429 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.429 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.429 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.430 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.430 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.430 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.430 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.430 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.430 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.430 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.431 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.431 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.431 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.431 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.431 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.431 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.432 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.432 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.432 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.432 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.432 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.432 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.433 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.433 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.433 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.433 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.433 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.433 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.434 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.434 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.434 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.434 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.434 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.434 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.435 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.435 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.435 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.435 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.435 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.436 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.436 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.436 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.436 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.436 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.436 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.436 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.437 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.437 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.437 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.437 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.437 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.437 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.437 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.437 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.438 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.438 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.438 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.438 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.438 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.438 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.438 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.439 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.439 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.439 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.439 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.439 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.440 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.440 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.440 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.440 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.440 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.441 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.441 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.441 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.441 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.441 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.441 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.442 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.442 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.442 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.442 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.442 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.443 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.443 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.443 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.443 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.443 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.443 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.444 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.444 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.444 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.444 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.444 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.445 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.445 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.445 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.445 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.445 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.445 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.446 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.446 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.446 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.446 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.446 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.446 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.447 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.447 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.447 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.447 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.447 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.448 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.448 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.448 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.448 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.448 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.448 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.449 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.449 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.449 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.449 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.449 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.450 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.450 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.450 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.450 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.450 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.451 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.451 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.451 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.451 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.451 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.451 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.451 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.452 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.452 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.452 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.452 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.452 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.452 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.452 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.453 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.453 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.453 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.453 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.453 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.453 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.453 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.454 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.454 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.454 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.454 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.454 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.454 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.454 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.455 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.455 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.455 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.455 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.455 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.455 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.455 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.456 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.456 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.456 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.456 188522 WARNING oslo_config.cfg [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  1 04:07:05 np0005540697 nova_compute[188509]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  1 04:07:05 np0005540697 nova_compute[188509]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  1 04:07:05 np0005540697 nova_compute[188509]: and ``live_migration_inbound_addr`` respectively.
Dec  1 04:07:05 np0005540697 nova_compute[188509]: ).  Its value may be silently ignored in the future.#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.456 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.456 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.457 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.457 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.457 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.457 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.457 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.457 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.458 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.458 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.458 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.458 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.458 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.458 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.458 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.459 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.459 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.459 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.459 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.459 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.459 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.459 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.460 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.460 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.460 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.460 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.460 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.460 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.460 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.461 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.461 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.461 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.461 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.461 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.461 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.462 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.462 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.462 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.462 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.462 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.462 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.463 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.463 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.463 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.463 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.463 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.463 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.463 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.464 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.464 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.464 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.464 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.464 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.464 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.464 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.465 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.465 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.465 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.465 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.465 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.465 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.465 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.466 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.466 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.466 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.466 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.466 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.467 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.467 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.467 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.467 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.467 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.467 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.467 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.468 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.468 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.468 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.468 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.468 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.468 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.469 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.469 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.469 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.469 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.469 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.469 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.470 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.470 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.470 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.470 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.470 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.470 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.471 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.471 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.471 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.471 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.471 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.471 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.472 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.472 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.472 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.472 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.472 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.472 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.473 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.473 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.473 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.473 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.473 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.473 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.474 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.474 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.474 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.474 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.474 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.474 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.475 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.475 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.475 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.475 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.475 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.476 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.476 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.476 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.476 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.476 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.477 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.477 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.477 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.477 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.477 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.477 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.478 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.478 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.478 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.478 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.478 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.479 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.479 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.479 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.479 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.479 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.479 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.480 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.480 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.480 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.480 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.480 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.480 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.480 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.481 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.481 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.481 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.481 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.481 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.481 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.482 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.482 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.482 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.482 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.482 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.483 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.483 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.483 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.483 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.483 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.483 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.484 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.484 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.484 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.484 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.484 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.485 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.485 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.485 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.485 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.485 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.485 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.485 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.486 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.486 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.486 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.486 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.487 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.487 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.487 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.487 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.487 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.487 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.487 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.488 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.488 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.488 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.488 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.488 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.488 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.488 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.489 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.489 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.489 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.489 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.489 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.489 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.490 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.490 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.490 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.490 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.490 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.490 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.490 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.491 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.491 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.491 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.491 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.491 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.491 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.491 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.492 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.492 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.492 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.492 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.492 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.492 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.492 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.493 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.493 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.493 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.493 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.493 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.493 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.493 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.494 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.494 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.494 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.494 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.494 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.494 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.494 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.495 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.495 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.495 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.495 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.495 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.495 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.495 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.496 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.496 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.496 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.496 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.496 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.497 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.497 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.497 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.497 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.497 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.497 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.498 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.498 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.498 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.498 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.498 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.498 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.498 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.499 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.499 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.499 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.499 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.499 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.499 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.500 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.500 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.500 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.500 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.500 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.501 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.501 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.501 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.501 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.501 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.501 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.502 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.502 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.502 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.502 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.502 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.503 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.503 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.503 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.503 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.503 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.504 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.504 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.504 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.504 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.504 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.505 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.505 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.505 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.505 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.505 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.505 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.505 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.506 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.506 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.506 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.506 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.506 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.506 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.507 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.507 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.507 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.507 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.507 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.507 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.507 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.508 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.508 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.508 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.508 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.508 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.508 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.508 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.509 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.509 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.509 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.509 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.509 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.509 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.509 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.509 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.510 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.510 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.510 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.510 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.510 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.510 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.510 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.511 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.511 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.511 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.511 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.511 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.511 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.511 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.512 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.512 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.512 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.512 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.512 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.512 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.512 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.513 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.513 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.513 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.513 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.513 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.513 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.513 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.513 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.514 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.514 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.514 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.514 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.514 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.514 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.514 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.515 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.515 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.515 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.515 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.515 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.515 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.515 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.515 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.516 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.516 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.516 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.516 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.516 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.516 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.516 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.516 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.517 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.517 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.517 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.517 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.517 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.517 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.517 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.518 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.518 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.518 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.518 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.518 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.518 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.518 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.519 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.519 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.519 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.519 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.519 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.519 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.519 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.520 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.520 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.520 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.520 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.520 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.520 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.520 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.520 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.521 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.521 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.521 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.521 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.521 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.521 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.521 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.522 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.522 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.522 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.522 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.522 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.522 188522 DEBUG oslo_service.service [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.523 188522 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.539 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.539 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.540 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.540 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  1 04:07:05 np0005540697 systemd[1]: Starting libvirt QEMU daemon...
Dec  1 04:07:05 np0005540697 systemd[1]: Started libvirt QEMU daemon.
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.611 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f023a1bf730> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.613 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f023a1bf730> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.614 188522 INFO nova.virt.libvirt.driver [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.632 188522 WARNING nova.virt.libvirt.driver [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  1 04:07:05 np0005540697 nova_compute[188509]: 2025-12-01 09:07:05.633 188522 DEBUG nova.virt.libvirt.volume.mount [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  1 04:07:05 np0005540697 python3.9[189189]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  1 04:07:05 np0005540697 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.440 188522 INFO nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Libvirt host capabilities <capabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <host>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <uuid>8504d282-d8be-435b-9f17-042283c7909f</uuid>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <arch>x86_64</arch>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model>EPYC-Rome-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <vendor>AMD</vendor>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <microcode version='16777317'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <signature family='23' model='49' stepping='0'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='x2apic'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='tsc-deadline'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='osxsave'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='hypervisor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='tsc_adjust'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='spec-ctrl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='stibp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='arch-capabilities'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='cmp_legacy'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='topoext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='virt-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='lbrv'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='tsc-scale'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='vmcb-clean'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='pause-filter'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='pfthreshold'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='svme-addr-chk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='rdctl-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='skip-l1dfl-vmentry'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='mds-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature name='pschange-mc-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <pages unit='KiB' size='4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <pages unit='KiB' size='2048'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <pages unit='KiB' size='1048576'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <power_management>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <suspend_mem/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <suspend_disk/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <suspend_hybrid/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </power_management>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <iommu support='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <migration_features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <live/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <uri_transports>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <uri_transport>tcp</uri_transport>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <uri_transport>rdma</uri_transport>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </uri_transports>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </migration_features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <topology>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <cells num='1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <cell id='0'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:          <memory unit='KiB'>7864324</memory>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:          <pages unit='KiB' size='4'>1966081</pages>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:          <pages unit='KiB' size='2048'>0</pages>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:          <distances>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <sibling id='0' value='10'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:          </distances>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:          <cpus num='8'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:          </cpus>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        </cell>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </cells>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </topology>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <cache>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </cache>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <secmodel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model>selinux</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <doi>0</doi>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </secmodel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <secmodel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model>dac</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <doi>0</doi>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </secmodel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </host>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <guest>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <os_type>hvm</os_type>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <arch name='i686'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <wordsize>32</wordsize>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <domain type='qemu'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <domain type='kvm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </arch>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <pae/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <nonpae/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <acpi default='on' toggle='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <apic default='on' toggle='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <cpuselection/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <deviceboot/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <disksnapshot default='on' toggle='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <externalSnapshot/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </guest>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <guest>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <os_type>hvm</os_type>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <arch name='x86_64'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <wordsize>64</wordsize>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <domain type='qemu'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <domain type='kvm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </arch>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <acpi default='on' toggle='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <apic default='on' toggle='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <cpuselection/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <deviceboot/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <disksnapshot default='on' toggle='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <externalSnapshot/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </guest>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 
Dec  1 04:07:06 np0005540697 nova_compute[188509]: </capabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: #033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.447 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.468 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  1 04:07:06 np0005540697 nova_compute[188509]: <domainCapabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <domain>kvm</domain>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <arch>i686</arch>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <vcpu max='240'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <iothreads supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <os supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <enum name='firmware'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <loader supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>rom</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pflash</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='readonly'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>yes</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>no</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='secure'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>no</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </loader>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </os>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='host-passthrough' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='hostPassthroughMigratable'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>on</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>off</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='maximum' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='maximumMigratable'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>on</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>off</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='host-model' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <vendor>AMD</vendor>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='x2apic'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='hypervisor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='stibp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='overflow-recov'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='succor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='lbrv'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc-scale'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='flushbyasid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='pause-filter'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='pfthreshold'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='disable' name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='custom' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Dhyana-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Genoa'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='auto-ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='auto-ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-128'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-256'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-512'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v6'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v7'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='KnightsMill'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512er'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512pf'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='KnightsMill-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512er'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512pf'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G4-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tbm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G5-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tbm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SierraForest'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cmpccxadd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SierraForest-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cmpccxadd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='athlon'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='athlon-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='core2duo'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='core2duo-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='coreduo'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='coreduo-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='n270'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='n270-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='phenom'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='phenom-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <memoryBacking supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <enum name='sourceType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>file</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>anonymous</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>memfd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </memoryBacking>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <devices>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <disk supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='diskDevice'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>disk</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>cdrom</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>floppy</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>lun</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='bus'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ide</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>fdc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>scsi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>sata</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-non-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </disk>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <graphics supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vnc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>egl-headless</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dbus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </graphics>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <video supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='modelType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vga</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>cirrus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>none</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>bochs</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ramfb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </video>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <hostdev supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='mode'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>subsystem</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='startupPolicy'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>default</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>mandatory</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>requisite</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>optional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='subsysType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pci</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>scsi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='capsType'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='pciBackend'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </hostdev>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <rng supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-non-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>random</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>egd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>builtin</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </rng>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <filesystem supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='driverType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>path</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>handle</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtiofs</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </filesystem>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <tpm supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tpm-tis</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tpm-crb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>emulator</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>external</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendVersion'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>2.0</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </tpm>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <redirdev supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='bus'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </redirdev>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <channel supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pty</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>unix</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </channel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <crypto supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>qemu</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>builtin</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </crypto>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <interface supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>default</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>passt</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </interface>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <panic supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>isa</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>hyperv</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </panic>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <console supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>null</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pty</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dev</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>file</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pipe</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>stdio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>udp</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tcp</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>unix</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>qemu-vdagent</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dbus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </console>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </devices>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <gic supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <vmcoreinfo supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <genid supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <backingStoreInput supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <backup supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <async-teardown supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <ps2 supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <sev supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <sgx supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <hyperv supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='features'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>relaxed</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vapic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>spinlocks</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vpindex</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>runtime</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>synic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>stimer</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>reset</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vendor_id</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>frequencies</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>reenlightenment</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tlbflush</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ipi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>avic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>emsr_bitmap</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>xmm_input</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <defaults>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <spinlocks>4095</spinlocks>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <stimer_direct>on</stimer_direct>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </defaults>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </hyperv>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <launchSecurity supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='sectype'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tdx</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </launchSecurity>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: </domainCapabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.474 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  1 04:07:06 np0005540697 nova_compute[188509]: <domainCapabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <domain>kvm</domain>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <arch>i686</arch>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <vcpu max='4096'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <iothreads supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <os supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <enum name='firmware'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <loader supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>rom</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pflash</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='readonly'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>yes</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>no</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='secure'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>no</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </loader>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </os>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='host-passthrough' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='hostPassthroughMigratable'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>on</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>off</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='maximum' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='maximumMigratable'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>on</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>off</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='host-model' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <vendor>AMD</vendor>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='x2apic'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='hypervisor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='stibp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='overflow-recov'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='succor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='lbrv'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc-scale'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='flushbyasid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='pause-filter'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='pfthreshold'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='disable' name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='custom' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Dhyana-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Genoa'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='auto-ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='auto-ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-128'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-256'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-512'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v6'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v7'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='KnightsMill'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512er'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512pf'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='KnightsMill-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512er'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512pf'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G4-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tbm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G5-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tbm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SierraForest'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cmpccxadd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SierraForest-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cmpccxadd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='athlon'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='athlon-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='core2duo'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='core2duo-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='coreduo'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='coreduo-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='n270'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='n270-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='phenom'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='phenom-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <memoryBacking supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <enum name='sourceType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>file</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>anonymous</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>memfd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </memoryBacking>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <devices>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <disk supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='diskDevice'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>disk</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>cdrom</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>floppy</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>lun</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='bus'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>fdc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>scsi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>sata</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-non-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </disk>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <graphics supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vnc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>egl-headless</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dbus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </graphics>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <video supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='modelType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vga</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>cirrus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>none</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>bochs</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ramfb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </video>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <hostdev supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='mode'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>subsystem</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='startupPolicy'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>default</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>mandatory</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>requisite</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>optional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='subsysType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pci</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>scsi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='capsType'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='pciBackend'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </hostdev>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <rng supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-non-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>random</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>egd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>builtin</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </rng>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <filesystem supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='driverType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>path</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>handle</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtiofs</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </filesystem>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <tpm supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tpm-tis</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tpm-crb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>emulator</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>external</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendVersion'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>2.0</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </tpm>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <redirdev supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='bus'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </redirdev>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <channel supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pty</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>unix</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </channel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <crypto supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>qemu</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>builtin</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </crypto>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <interface supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>default</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>passt</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </interface>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <panic supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>isa</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>hyperv</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </panic>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <console supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>null</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pty</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dev</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>file</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pipe</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>stdio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>udp</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tcp</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>unix</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>qemu-vdagent</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dbus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </console>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </devices>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <gic supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <vmcoreinfo supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <genid supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <backingStoreInput supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <backup supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <async-teardown supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <ps2 supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <sev supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <sgx supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <hyperv supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='features'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>relaxed</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vapic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>spinlocks</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vpindex</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>runtime</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>synic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>stimer</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>reset</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vendor_id</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>frequencies</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>reenlightenment</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tlbflush</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ipi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>avic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>emsr_bitmap</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>xmm_input</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <defaults>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <spinlocks>4095</spinlocks>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <stimer_direct>on</stimer_direct>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </defaults>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </hyperv>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <launchSecurity supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='sectype'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tdx</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </launchSecurity>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: </domainCapabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.503 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.509 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  1 04:07:06 np0005540697 nova_compute[188509]: <domainCapabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <domain>kvm</domain>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <arch>x86_64</arch>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <vcpu max='240'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <iothreads supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <os supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <enum name='firmware'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <loader supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>rom</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pflash</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='readonly'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>yes</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>no</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='secure'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>no</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </loader>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </os>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='host-passthrough' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='hostPassthroughMigratable'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>on</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>off</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='maximum' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='maximumMigratable'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>on</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>off</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='host-model' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <vendor>AMD</vendor>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='x2apic'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='hypervisor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='stibp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='overflow-recov'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='succor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='lbrv'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc-scale'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='flushbyasid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='pause-filter'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='pfthreshold'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='disable' name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='custom' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Dhyana-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Genoa'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='auto-ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='auto-ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-128'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-256'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-512'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 python3.9[189425]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v6'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v7'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='KnightsMill'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512er'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512pf'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='KnightsMill-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512er'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512pf'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G4-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tbm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G5-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tbm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SierraForest'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cmpccxadd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SierraForest-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cmpccxadd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='athlon'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='athlon-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='core2duo'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='core2duo-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='coreduo'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='coreduo-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='n270'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='n270-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='phenom'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='phenom-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <memoryBacking supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <enum name='sourceType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>file</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>anonymous</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>memfd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </memoryBacking>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <devices>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <disk supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='diskDevice'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>disk</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>cdrom</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>floppy</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>lun</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='bus'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ide</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>fdc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>scsi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>sata</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-non-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </disk>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <graphics supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vnc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>egl-headless</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dbus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </graphics>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <video supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='modelType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vga</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>cirrus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>none</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>bochs</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ramfb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </video>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <hostdev supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='mode'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>subsystem</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='startupPolicy'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>default</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>mandatory</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>requisite</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>optional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='subsysType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pci</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>scsi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='capsType'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='pciBackend'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </hostdev>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <rng supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-non-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>random</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>egd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>builtin</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </rng>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <filesystem supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='driverType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>path</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>handle</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtiofs</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </filesystem>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <tpm supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tpm-tis</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tpm-crb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>emulator</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>external</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendVersion'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>2.0</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </tpm>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <redirdev supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='bus'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </redirdev>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <channel supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pty</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>unix</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </channel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <crypto supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>qemu</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>builtin</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </crypto>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <interface supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>default</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>passt</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </interface>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <panic supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>isa</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>hyperv</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </panic>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <console supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>null</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pty</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dev</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>file</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pipe</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>stdio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>udp</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tcp</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>unix</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>qemu-vdagent</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dbus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </console>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </devices>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <gic supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <vmcoreinfo supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <genid supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <backingStoreInput supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <backup supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <async-teardown supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <ps2 supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <sev supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <sgx supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <hyperv supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='features'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>relaxed</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vapic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>spinlocks</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vpindex</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>runtime</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>synic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>stimer</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>reset</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vendor_id</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>frequencies</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>reenlightenment</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tlbflush</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ipi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>avic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>emsr_bitmap</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>xmm_input</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <defaults>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <spinlocks>4095</spinlocks>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <stimer_direct>on</stimer_direct>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </defaults>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </hyperv>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <launchSecurity supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='sectype'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tdx</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </launchSecurity>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: </domainCapabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.566 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  1 04:07:06 np0005540697 nova_compute[188509]: <domainCapabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <domain>kvm</domain>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <arch>x86_64</arch>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <vcpu max='4096'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <iothreads supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <os supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <enum name='firmware'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>efi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <loader supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>rom</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pflash</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='readonly'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>yes</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>no</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='secure'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>yes</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>no</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </loader>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </os>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='host-passthrough' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='hostPassthroughMigratable'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>on</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>off</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='maximum' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='maximumMigratable'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>on</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>off</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='host-model' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <vendor>AMD</vendor>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='x2apic'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='hypervisor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='stibp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='overflow-recov'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='succor'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='lbrv'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='tsc-scale'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 04:07:06 np0005540697 systemd[1]: Stopping nova_compute container...
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='flushbyasid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='pause-filter'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='pfthreshold'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <feature policy='disable' name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <mode name='custom' supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Broadwell-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Cooperlake-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Denverton-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Dhyana-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Genoa'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='auto-ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='auto-ibrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Milan-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amd-psfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='stibp-always-on'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-Rome-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='EPYC-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='GraniteRapids-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-128'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-256'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx10-512'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='prefetchiti'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Haswell-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v6'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Icelake-Server-v7'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='IvyBridge-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='KnightsMill'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512er'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512pf'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='KnightsMill-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512er'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512pf'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G4-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tbm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Opteron_G5-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fma4'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tbm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xop'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SapphireRapids-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='amx-tile'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-bf16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-fp16'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bitalg'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrc'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fzrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='la57'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='taa-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xfd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SierraForest'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cmpccxadd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='SierraForest-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ifma'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cmpccxadd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fbsdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='fsrs'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ibrs-all'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mcdt-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pbrsb-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='psdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='serialize'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vaes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Client-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='hle'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='rtm'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Skylake-Server-v5'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512bw'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512cd'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512dq'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512f'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='avx512vl'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='invpcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pcid'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='pku'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='mpx'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v2'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v3'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='core-capability'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='split-lock-detect'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='Snowridge-v4'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='cldemote'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='erms'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='gfni'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdir64b'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='movdiri'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='xsaves'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='athlon'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='athlon-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='core2duo'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='core2duo-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='coreduo'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='coreduo-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='n270'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='n270-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='ss'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='phenom'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <blockers model='phenom-v1'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnow'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <feature name='3dnowext'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </blockers>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </mode>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </cpu>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <memoryBacking supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <enum name='sourceType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>file</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>anonymous</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <value>memfd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </memoryBacking>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <devices>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <disk supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='diskDevice'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>disk</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>cdrom</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>floppy</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>lun</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='bus'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>fdc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>scsi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>sata</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-non-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </disk>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <graphics supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vnc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>egl-headless</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dbus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </graphics>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <video supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='modelType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vga</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>cirrus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>none</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>bochs</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ramfb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </video>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <hostdev supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='mode'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>subsystem</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='startupPolicy'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>default</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>mandatory</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>requisite</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>optional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='subsysType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pci</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>scsi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='capsType'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='pciBackend'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </hostdev>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <rng supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtio-non-transitional</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>random</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>egd</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>builtin</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </rng>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <filesystem supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='driverType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>path</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>handle</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>virtiofs</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </filesystem>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <tpm supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tpm-tis</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tpm-crb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>emulator</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>external</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendVersion'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>2.0</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </tpm>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <redirdev supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='bus'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>usb</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </redirdev>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <channel supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pty</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>unix</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </channel>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <crypto supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>qemu</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendModel'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>builtin</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </crypto>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <interface supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='backendType'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>default</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>passt</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </interface>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <panic supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='model'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>isa</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>hyperv</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </panic>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <console supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='type'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>null</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vc</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pty</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dev</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>file</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>pipe</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>stdio</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>udp</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tcp</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>unix</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>qemu-vdagent</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>dbus</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </console>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </devices>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  <features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <gic supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <vmcoreinfo supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <genid supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <backingStoreInput supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <backup supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <async-teardown supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <ps2 supported='yes'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <sev supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <sgx supported='no'/>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <hyperv supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='features'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>relaxed</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vapic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>spinlocks</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vpindex</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>runtime</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>synic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>stimer</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>reset</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>vendor_id</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>frequencies</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>reenlightenment</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tlbflush</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>ipi</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>avic</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>emsr_bitmap</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>xmm_input</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <defaults>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <spinlocks>4095</spinlocks>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <stimer_direct>on</stimer_direct>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </defaults>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </hyperv>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    <launchSecurity supported='yes'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      <enum name='sectype'>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:        <value>tdx</value>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:      </enum>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:    </launchSecurity>
Dec  1 04:07:06 np0005540697 nova_compute[188509]:  </features>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: </domainCapabilities>
Dec  1 04:07:06 np0005540697 nova_compute[188509]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.633 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.634 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.634 188522 DEBUG nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.634 188522 INFO nova.virt.libvirt.host [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Secure Boot support detected#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.636 188522 INFO nova.virt.libvirt.driver [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.636 188522 INFO nova.virt.libvirt.driver [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.646 188522 DEBUG nova.virt.libvirt.driver [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.680 188522 INFO nova.virt.node [None req-a8567ccf-cd0c-4a0c-b382-dadb486a7f53 - - - - - -] Determined node identity 143c7fe7-af1f-477a-978c-6a994d785d98 from /var/lib/nova/compute_id#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.698 188522 DEBUG oslo_concurrency.lockutils [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.698 188522 DEBUG oslo_concurrency.lockutils [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 04:07:06 np0005540697 nova_compute[188509]: 2025-12-01 09:07:06.699 188522 DEBUG oslo_concurrency.lockutils [None req-1c1ccaaf-8e8f-4681-935f-da1d987b148a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 04:07:07 np0005540697 virtqemud[189211]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  1 04:07:07 np0005540697 virtqemud[189211]: hostname: compute-0
Dec  1 04:07:07 np0005540697 virtqemud[189211]: End of file while reading data: Input/output error
Dec  1 04:07:07 np0005540697 systemd[1]: libpod-f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912.scope: Deactivated successfully.
Dec  1 04:07:07 np0005540697 systemd[1]: libpod-f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912.scope: Consumed 3.249s CPU time.
Dec  1 04:07:07 np0005540697 podman[189433]: 2025-12-01 09:07:07.169683261 +0000 UTC m=+0.513069232 container died f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:07:07 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912-userdata-shm.mount: Deactivated successfully.
Dec  1 04:07:07 np0005540697 systemd[1]: var-lib-containers-storage-overlay-c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18-merged.mount: Deactivated successfully.
Dec  1 04:07:07 np0005540697 podman[189433]: 2025-12-01 09:07:07.245905408 +0000 UTC m=+0.589291409 container cleanup f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 04:07:07 np0005540697 podman[189433]: nova_compute
Dec  1 04:07:07 np0005540697 podman[189463]: nova_compute
Dec  1 04:07:07 np0005540697 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  1 04:07:07 np0005540697 systemd[1]: Stopped nova_compute container.
Dec  1 04:07:07 np0005540697 systemd[1]: Starting nova_compute container...
Dec  1 04:07:07 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:07:07 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:07 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:07 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:07 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:07 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16192c212c63f91de88878f1bbbf6ec6987fab22bbce4ecae4d84e4f5ba8d18/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:07 np0005540697 podman[189476]: 2025-12-01 09:07:07.415803675 +0000 UTC m=+0.083943196 container init f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec  1 04:07:07 np0005540697 podman[189476]: 2025-12-01 09:07:07.422642375 +0000 UTC m=+0.090781876 container start f8c1b8c93d972d7f632881e088593a05a338d9387e66dc713caf490f08b04912 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute)
Dec  1 04:07:07 np0005540697 podman[189476]: nova_compute
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + sudo -E kolla_set_configs
Dec  1 04:07:07 np0005540697 systemd[1]: Started nova_compute container.
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Validating config file
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying service configuration files
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /etc/ceph
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Creating directory /etc/ceph
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /etc/ceph
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Writing out command to execute
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 04:07:07 np0005540697 nova_compute[189491]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 04:07:07 np0005540697 nova_compute[189491]: ++ cat /run_command
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + CMD=nova-compute
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + ARGS=
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + sudo kolla_copy_cacerts
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + [[ ! -n '' ]]
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + . kolla_extend_start
Dec  1 04:07:07 np0005540697 nova_compute[189491]: Running command: 'nova-compute'
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + echo 'Running command: '\''nova-compute'\'''
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + umask 0022
Dec  1 04:07:07 np0005540697 nova_compute[189491]: + exec nova-compute
Dec  1 04:07:08 np0005540697 python3.9[189654]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  1 04:07:08 np0005540697 systemd[1]: Started libpod-conmon-aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355.scope.
Dec  1 04:07:08 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:07:08 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10aa947be78d1cf3ad71f717a2e910a760dbae6f0f7e39f54c1d11bb2da55/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:08 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10aa947be78d1cf3ad71f717a2e910a760dbae6f0f7e39f54c1d11bb2da55/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:08 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10aa947be78d1cf3ad71f717a2e910a760dbae6f0f7e39f54c1d11bb2da55/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  1 04:07:08 np0005540697 podman[189678]: 2025-12-01 09:07:08.670685968 +0000 UTC m=+0.350003825 container init aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 04:07:08 np0005540697 podman[189678]: 2025-12-01 09:07:08.679639107 +0000 UTC m=+0.358956954 container start aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec  1 04:07:08 np0005540697 python3.9[189654]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Applying nova statedir ownership
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  1 04:07:08 np0005540697 nova_compute_init[189700]: INFO:nova_statedir:Nova statedir ownership complete
Dec  1 04:07:08 np0005540697 systemd[1]: libpod-aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355.scope: Deactivated successfully.
Dec  1 04:07:08 np0005540697 podman[189701]: 2025-12-01 09:07:08.761014153 +0000 UTC m=+0.047525499 container died aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  1 04:07:08 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355-userdata-shm.mount: Deactivated successfully.
Dec  1 04:07:08 np0005540697 systemd[1]: var-lib-containers-storage-overlay-73a10aa947be78d1cf3ad71f717a2e910a760dbae6f0f7e39f54c1d11bb2da55-merged.mount: Deactivated successfully.
Dec  1 04:07:08 np0005540697 podman[189711]: 2025-12-01 09:07:08.805729064 +0000 UTC m=+0.047497957 container cleanup aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 04:07:08 np0005540697 systemd[1]: libpod-conmon-aa487ecc35bce760da2515e5327c27adc3b13249d986942b440a9b03460ab355.scope: Deactivated successfully.
Dec  1 04:07:09 np0005540697 systemd[1]: session-23.scope: Deactivated successfully.
Dec  1 04:07:09 np0005540697 systemd[1]: session-23.scope: Consumed 2min 987ms CPU time.
Dec  1 04:07:09 np0005540697 systemd-logind[792]: Session 23 logged out. Waiting for processes to exit.
Dec  1 04:07:09 np0005540697 systemd-logind[792]: Removed session 23.
Dec  1 04:07:09 np0005540697 nova_compute[189491]: 2025-12-01 09:07:09.529 189495 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  1 04:07:09 np0005540697 nova_compute[189491]: 2025-12-01 09:07:09.530 189495 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  1 04:07:09 np0005540697 nova_compute[189491]: 2025-12-01 09:07:09.530 189495 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  1 04:07:09 np0005540697 nova_compute[189491]: 2025-12-01 09:07:09.530 189495 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec  1 04:07:09 np0005540697 nova_compute[189491]: 2025-12-01 09:07:09.678 189495 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 04:07:09 np0005540697 nova_compute[189491]: 2025-12-01 09:07:09.702 189495 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 04:07:09 np0005540697 nova_compute[189491]: 2025-12-01 09:07:09.703 189495 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.156 189495 INFO nova.virt.driver [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.264 189495 INFO nova.compute.provider_config [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.359 189495 DEBUG oslo_concurrency.lockutils [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.360 189495 DEBUG oslo_concurrency.lockutils [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.360 189495 DEBUG oslo_concurrency.lockutils [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.360 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.360 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.360 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.361 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.361 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.361 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.361 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.361 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.361 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.361 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.362 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.362 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.362 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.362 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.362 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.362 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.362 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.363 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.363 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.363 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.363 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.363 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.363 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.363 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.363 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.364 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.364 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.364 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.364 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.364 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.364 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.365 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.365 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.365 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.365 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.365 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.365 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.365 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.366 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.366 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.366 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.366 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.366 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.366 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.366 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.367 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.367 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.367 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.367 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.367 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.367 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.367 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.368 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.368 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.368 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.368 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.368 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.368 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.368 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.368 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.369 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.369 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.369 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.369 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.369 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.369 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.369 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.370 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.370 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.370 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.370 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.370 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.370 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.370 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.371 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.371 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.371 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.371 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.371 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.371 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.371 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.372 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.372 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.372 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.372 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.373 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.373 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.373 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.373 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.374 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.374 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.374 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.374 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.374 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.375 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.375 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.375 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.375 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.376 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.376 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.376 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.376 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.377 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.377 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.377 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.377 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.377 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.378 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.378 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.378 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.378 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.379 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.379 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.379 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.379 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.379 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.380 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.380 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.380 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.380 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.381 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.381 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.381 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.381 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.381 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.382 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.382 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.382 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.382 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.383 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.383 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.383 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.384 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.384 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.384 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.384 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.384 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.385 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.385 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.385 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.385 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.385 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.386 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.386 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.386 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.386 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.387 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.387 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.387 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.387 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.388 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.388 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.388 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.388 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.388 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.389 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.389 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.389 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.389 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.390 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.390 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.390 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.390 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.390 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.391 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.391 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.391 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.391 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.392 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.392 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.392 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.392 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.392 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.393 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.393 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.393 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.393 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.393 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.394 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.394 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.394 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.394 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.395 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.395 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.395 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.395 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.395 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.396 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.396 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.396 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.396 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.397 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.397 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.397 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.397 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.397 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.398 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.398 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.398 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.398 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.398 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.399 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.399 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.399 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.399 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.399 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.400 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.400 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.400 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.400 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.401 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.401 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.401 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.401 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.401 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.402 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.402 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.402 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.402 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.402 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.403 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.403 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.403 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.403 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.403 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.403 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.403 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.404 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.404 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.404 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.404 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.404 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.404 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.404 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.404 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.405 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.405 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.405 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.405 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.405 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.405 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.405 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.406 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.406 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.406 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.406 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.406 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.406 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.406 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.406 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.407 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.407 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.407 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.407 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.407 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.407 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.407 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.408 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.408 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.408 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.408 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.408 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.408 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.408 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.408 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.409 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.409 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.409 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.409 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.409 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.409 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.409 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.409 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.410 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.410 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.410 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.410 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.410 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.410 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.410 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.411 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.411 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.411 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.411 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.411 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.411 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.411 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.412 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.412 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.412 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.412 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.412 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.412 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.412 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.413 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.413 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.413 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.413 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.413 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.413 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.413 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.414 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.414 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.414 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.414 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.414 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.414 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.414 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.415 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.415 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.415 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.415 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.415 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.415 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.415 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.415 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.416 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.416 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.416 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.416 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.416 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.416 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.416 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.417 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.417 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.417 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.417 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.417 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.417 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.417 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.418 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.418 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.418 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.418 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.418 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.418 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.418 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.419 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.419 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.419 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.419 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.419 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.419 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.420 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.420 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.420 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.420 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.420 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.420 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.420 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.421 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.421 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.421 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.421 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.421 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.422 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.422 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.422 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.422 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.422 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.422 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.423 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.423 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.423 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.423 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.423 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.423 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.424 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.424 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.424 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.424 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.424 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.424 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.424 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.425 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.425 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.425 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.425 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.425 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.425 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.425 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.426 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.426 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.426 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.426 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.426 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.427 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.427 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.427 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.427 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.427 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.427 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.427 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.428 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.428 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.428 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.428 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.428 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.428 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.429 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.429 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.429 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.429 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.429 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.430 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.430 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.430 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.430 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.430 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.430 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.430 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.431 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.431 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.431 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.431 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.431 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.431 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.432 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.432 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.432 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.432 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.432 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.432 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.432 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.433 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.433 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.433 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.433 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.433 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.433 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.433 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.434 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.434 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.434 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.434 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.434 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.434 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.435 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.435 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.435 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.435 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.435 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.435 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.435 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.436 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.436 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.436 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.436 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.436 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.436 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.436 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.437 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.437 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.437 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.437 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.437 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.437 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.438 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.438 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.438 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.438 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.438 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.438 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.438 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.439 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.439 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.439 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.439 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.439 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.439 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.440 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.440 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.440 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.440 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.440 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.440 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.440 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.441 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.441 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.441 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.441 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.441 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.441 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.441 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.442 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.442 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.442 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.442 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.442 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.442 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.443 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.443 189495 WARNING oslo_config.cfg [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  1 04:07:10 np0005540697 nova_compute[189491]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  1 04:07:10 np0005540697 nova_compute[189491]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  1 04:07:10 np0005540697 nova_compute[189491]: and ``live_migration_inbound_addr`` respectively.
Dec  1 04:07:10 np0005540697 nova_compute[189491]: ).  Its value may be silently ignored in the future.#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.443 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.443 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.444 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.444 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.444 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.444 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.444 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.444 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.444 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.445 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.445 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.445 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.445 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.445 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.445 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.446 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.446 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.446 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.446 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.446 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.446 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.447 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.447 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.447 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.447 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.447 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.447 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.447 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.448 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.448 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.448 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.448 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.448 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.449 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.449 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.449 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.449 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.449 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.449 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.450 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.450 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.450 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.450 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.450 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.451 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.451 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.451 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.451 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.451 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.451 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.452 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.452 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.452 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.452 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.452 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.452 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.453 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.453 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.453 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.453 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.453 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.454 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.454 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.454 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.454 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.454 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.454 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.455 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.455 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.455 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.455 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.455 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.455 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.455 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.456 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.456 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.456 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.456 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.456 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.456 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.457 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.457 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.457 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.457 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.457 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.457 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.457 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.458 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.458 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.458 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.458 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.458 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.458 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.459 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.459 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.459 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.459 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.459 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.459 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.459 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.459 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.460 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.460 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.460 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.460 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.460 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.460 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.460 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.461 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.461 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.461 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.461 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.461 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.461 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.462 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.462 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.462 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.462 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.462 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.462 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.462 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.463 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.463 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.463 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.463 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.463 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.463 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.463 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.464 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.464 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.464 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.464 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.464 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.464 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.465 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.465 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.465 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.465 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.465 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.465 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.466 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.466 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.466 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.466 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.466 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.466 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.466 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.467 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.467 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.467 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.467 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.467 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.467 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.468 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.468 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.468 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.468 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.468 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.468 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.468 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.469 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.469 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.469 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.469 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.469 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.469 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.469 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.470 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.470 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.470 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.470 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.470 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.470 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.470 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.471 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.471 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.471 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.471 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.471 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.471 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.472 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.472 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.472 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.472 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.472 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.472 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.472 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.473 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.473 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.473 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.473 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.473 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.473 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.473 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.474 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.474 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.474 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.474 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.474 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.474 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.475 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.475 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.475 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.475 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.475 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.475 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.475 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.476 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.476 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.476 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.476 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.476 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.476 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.476 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.477 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.477 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.477 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.477 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.477 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.477 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.478 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.478 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.478 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.478 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.478 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.478 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.479 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.479 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.479 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.479 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.479 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.479 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.479 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.479 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.480 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.480 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.480 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.480 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.480 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.480 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.480 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.481 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.481 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.481 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.481 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.481 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.481 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.482 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.482 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.482 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.482 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.482 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.483 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.483 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.483 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.483 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.483 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.484 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.484 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.484 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.484 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.484 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.484 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.484 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.485 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.485 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.485 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.485 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.485 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.485 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.486 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.486 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.486 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.486 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.486 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.486 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.487 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.487 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.487 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.487 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.487 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.487 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.487 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.488 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.488 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.488 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.488 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.488 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.489 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.489 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.489 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.489 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.489 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.490 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.490 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.490 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.490 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.490 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.491 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.491 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.491 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.491 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.492 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.492 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.492 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.492 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.492 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.492 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.493 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.493 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.493 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.493 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.493 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.493 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.493 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.494 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.494 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.494 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.494 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.494 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.494 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.495 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.495 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.495 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.495 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.495 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.495 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.496 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.496 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.496 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.496 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.496 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.497 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.497 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.497 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.497 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.497 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.497 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.497 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.498 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.498 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.498 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.498 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.498 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.498 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.498 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.499 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.499 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.499 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.499 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.499 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.499 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.499 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.500 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.500 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.500 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.500 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.500 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.500 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.500 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.501 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.501 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.501 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.501 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.501 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.501 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.501 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.501 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.502 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.502 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.502 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.502 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.502 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.502 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.502 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.503 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.503 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.503 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.503 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.503 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.503 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.503 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.503 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.504 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.504 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.504 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.504 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.504 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.504 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.504 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.505 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.505 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.505 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.505 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.505 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.505 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.505 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.506 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.506 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.506 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.506 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.506 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.506 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.506 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.507 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.507 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.507 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.507 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.507 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.507 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.507 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.507 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.508 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.508 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.508 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.508 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.508 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.508 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.508 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.509 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.509 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.509 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.509 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.509 189495 DEBUG oslo_service.service [None req-2bcaabf2-ddae-46fe-8ffe-0bb6d44aad45 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.510 189495 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.527 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.528 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.528 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.529 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.541 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f268cc462e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.543 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f268cc462e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.543 189495 INFO nova.virt.libvirt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.549 189495 INFO nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Libvirt host capabilities <capabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <host>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <uuid>8504d282-d8be-435b-9f17-042283c7909f</uuid>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <arch>x86_64</arch>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model>EPYC-Rome-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <vendor>AMD</vendor>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <microcode version='16777317'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <signature family='23' model='49' stepping='0'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='x2apic'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='tsc-deadline'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='osxsave'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='hypervisor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='tsc_adjust'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='spec-ctrl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='stibp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='arch-capabilities'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='cmp_legacy'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='topoext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='virt-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='lbrv'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='tsc-scale'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='vmcb-clean'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='pause-filter'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='pfthreshold'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='svme-addr-chk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='rdctl-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='skip-l1dfl-vmentry'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='mds-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature name='pschange-mc-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <pages unit='KiB' size='4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <pages unit='KiB' size='2048'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <pages unit='KiB' size='1048576'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <power_management>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <suspend_mem/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <suspend_disk/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <suspend_hybrid/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </power_management>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <iommu support='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <migration_features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <live/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <uri_transports>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <uri_transport>tcp</uri_transport>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <uri_transport>rdma</uri_transport>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </uri_transports>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </migration_features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <topology>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <cells num='1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <cell id='0'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:          <memory unit='KiB'>7864324</memory>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:          <pages unit='KiB' size='4'>1966081</pages>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:          <pages unit='KiB' size='2048'>0</pages>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:          <distances>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <sibling id='0' value='10'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:          </distances>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:          <cpus num='8'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:          </cpus>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        </cell>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </cells>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </topology>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <cache>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </cache>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <secmodel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model>selinux</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <doi>0</doi>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </secmodel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <secmodel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model>dac</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <doi>0</doi>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </secmodel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </host>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <guest>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <os_type>hvm</os_type>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <arch name='i686'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <wordsize>32</wordsize>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <domain type='qemu'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <domain type='kvm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </arch>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <pae/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <nonpae/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <acpi default='on' toggle='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <apic default='on' toggle='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <cpuselection/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <deviceboot/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <disksnapshot default='on' toggle='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <externalSnapshot/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </guest>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <guest>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <os_type>hvm</os_type>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <arch name='x86_64'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <wordsize>64</wordsize>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <domain type='qemu'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <domain type='kvm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </arch>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <acpi default='on' toggle='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <apic default='on' toggle='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <cpuselection/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <deviceboot/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <disksnapshot default='on' toggle='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <externalSnapshot/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </guest>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 
Dec  1 04:07:10 np0005540697 nova_compute[189491]: </capabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: #033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.555 189495 WARNING nova.virt.libvirt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.555 189495 DEBUG nova.virt.libvirt.volume.mount [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.556 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.560 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  1 04:07:10 np0005540697 nova_compute[189491]: <domainCapabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <domain>kvm</domain>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <arch>i686</arch>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <vcpu max='240'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <iothreads supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <os supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <enum name='firmware'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <loader supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>rom</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pflash</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='readonly'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>yes</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>no</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='secure'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>no</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </loader>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </os>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='host-passthrough' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='hostPassthroughMigratable'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>on</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>off</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='maximum' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='maximumMigratable'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>on</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>off</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='host-model' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <vendor>AMD</vendor>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='x2apic'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='hypervisor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='stibp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='overflow-recov'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='succor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='lbrv'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc-scale'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='flushbyasid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='pause-filter'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='pfthreshold'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='disable' name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='custom' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Dhyana-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Genoa'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='auto-ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='auto-ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-128'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-256'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-512'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v6'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v7'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='KnightsMill'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512er'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512pf'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='KnightsMill-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512er'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512pf'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G4-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tbm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G5-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tbm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SierraForest'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cmpccxadd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SierraForest-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cmpccxadd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='athlon'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='athlon-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='core2duo'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='core2duo-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='coreduo'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='coreduo-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='n270'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='n270-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='phenom'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='phenom-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <memoryBacking supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <enum name='sourceType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>file</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>anonymous</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>memfd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </memoryBacking>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <devices>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <disk supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='diskDevice'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>disk</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>cdrom</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>floppy</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>lun</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='bus'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ide</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>fdc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>scsi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>sata</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-non-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </disk>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <graphics supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vnc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>egl-headless</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dbus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </graphics>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <video supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='modelType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vga</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>cirrus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>none</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>bochs</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ramfb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </video>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <hostdev supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='mode'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>subsystem</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='startupPolicy'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>default</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>mandatory</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>requisite</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>optional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='subsysType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pci</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>scsi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='capsType'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='pciBackend'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </hostdev>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <rng supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-non-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>random</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>egd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>builtin</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </rng>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <filesystem supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='driverType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>path</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>handle</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtiofs</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </filesystem>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <tpm supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tpm-tis</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tpm-crb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>emulator</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>external</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendVersion'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>2.0</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </tpm>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <redirdev supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='bus'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </redirdev>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <channel supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pty</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>unix</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </channel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <crypto supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>qemu</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>builtin</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </crypto>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <interface supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>default</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>passt</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </interface>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <panic supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>isa</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>hyperv</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </panic>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <console supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>null</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pty</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dev</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>file</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pipe</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>stdio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>udp</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tcp</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>unix</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>qemu-vdagent</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dbus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </console>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </devices>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <gic supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <vmcoreinfo supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <genid supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <backingStoreInput supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <backup supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <async-teardown supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <ps2 supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <sev supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <sgx supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <hyperv supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='features'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>relaxed</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vapic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>spinlocks</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vpindex</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>runtime</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>synic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>stimer</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>reset</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vendor_id</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>frequencies</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>reenlightenment</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tlbflush</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ipi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>avic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>emsr_bitmap</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>xmm_input</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <defaults>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <spinlocks>4095</spinlocks>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <stimer_direct>on</stimer_direct>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </defaults>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </hyperv>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <launchSecurity supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='sectype'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tdx</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </launchSecurity>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: </domainCapabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.565 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  1 04:07:10 np0005540697 nova_compute[189491]: <domainCapabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <domain>kvm</domain>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <arch>i686</arch>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <vcpu max='4096'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <iothreads supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <os supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <enum name='firmware'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <loader supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>rom</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pflash</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='readonly'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>yes</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>no</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='secure'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>no</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </loader>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </os>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='host-passthrough' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='hostPassthroughMigratable'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>on</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>off</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='maximum' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='maximumMigratable'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>on</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>off</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='host-model' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <vendor>AMD</vendor>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='x2apic'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='hypervisor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='stibp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='overflow-recov'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='succor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='lbrv'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc-scale'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='flushbyasid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='pause-filter'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='pfthreshold'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='disable' name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='custom' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Dhyana-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Genoa'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='auto-ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='auto-ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-128'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-256'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-512'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v6'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v7'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='KnightsMill'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512er'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512pf'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='KnightsMill-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512er'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512pf'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G4-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tbm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G5-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tbm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SierraForest'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cmpccxadd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SierraForest-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cmpccxadd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='athlon'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='athlon-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='core2duo'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='core2duo-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='coreduo'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='coreduo-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='n270'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='n270-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='phenom'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='phenom-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <memoryBacking supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <enum name='sourceType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>file</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>anonymous</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>memfd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </memoryBacking>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <devices>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <disk supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='diskDevice'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>disk</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>cdrom</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>floppy</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>lun</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='bus'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>fdc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>scsi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>sata</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-non-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </disk>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <graphics supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vnc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>egl-headless</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dbus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </graphics>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <video supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='modelType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vga</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>cirrus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>none</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>bochs</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ramfb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </video>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <hostdev supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='mode'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>subsystem</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='startupPolicy'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>default</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>mandatory</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>requisite</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>optional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='subsysType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pci</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>scsi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='capsType'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='pciBackend'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </hostdev>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <rng supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-non-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>random</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>egd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>builtin</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </rng>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <filesystem supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='driverType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>path</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>handle</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtiofs</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </filesystem>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <tpm supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tpm-tis</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tpm-crb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>emulator</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>external</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendVersion'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>2.0</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </tpm>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <redirdev supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='bus'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </redirdev>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <channel supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pty</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>unix</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </channel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <crypto supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>qemu</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>builtin</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </crypto>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <interface supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>default</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>passt</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </interface>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <panic supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>isa</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>hyperv</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </panic>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <console supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>null</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pty</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dev</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>file</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pipe</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>stdio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>udp</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tcp</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>unix</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>qemu-vdagent</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dbus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </console>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </devices>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <gic supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <vmcoreinfo supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <genid supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <backingStoreInput supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <backup supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <async-teardown supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <ps2 supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <sev supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <sgx supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <hyperv supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='features'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>relaxed</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vapic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>spinlocks</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vpindex</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>runtime</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>synic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>stimer</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>reset</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vendor_id</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>frequencies</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>reenlightenment</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tlbflush</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ipi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>avic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>emsr_bitmap</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>xmm_input</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <defaults>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <spinlocks>4095</spinlocks>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <stimer_direct>on</stimer_direct>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </defaults>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </hyperv>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <launchSecurity supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='sectype'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tdx</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </launchSecurity>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: </domainCapabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.591 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.596 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  1 04:07:10 np0005540697 nova_compute[189491]: <domainCapabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <domain>kvm</domain>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <arch>x86_64</arch>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <vcpu max='4096'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <iothreads supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <os supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <enum name='firmware'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>efi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <loader supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>rom</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pflash</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='readonly'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>yes</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>no</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='secure'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>yes</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>no</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </loader>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </os>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='host-passthrough' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='hostPassthroughMigratable'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>on</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>off</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='maximum' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='maximumMigratable'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>on</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>off</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='host-model' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <vendor>AMD</vendor>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='x2apic'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='hypervisor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='stibp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='overflow-recov'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='succor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='lbrv'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc-scale'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='flushbyasid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='pause-filter'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='pfthreshold'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='disable' name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='custom' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Dhyana-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Genoa'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='auto-ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='auto-ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-128'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-256'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-512'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v6'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v7'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='KnightsMill'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512er'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512pf'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='KnightsMill-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512er'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512pf'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G4-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tbm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G5-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tbm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SierraForest'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cmpccxadd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SierraForest-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cmpccxadd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='athlon'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='athlon-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='core2duo'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='core2duo-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='coreduo'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='coreduo-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='n270'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='n270-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='phenom'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='phenom-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <memoryBacking supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <enum name='sourceType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>file</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>anonymous</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>memfd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </memoryBacking>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <devices>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <disk supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='diskDevice'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>disk</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>cdrom</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>floppy</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>lun</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='bus'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>fdc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>scsi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>sata</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-non-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </disk>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <graphics supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vnc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>egl-headless</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dbus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </graphics>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <video supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='modelType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vga</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>cirrus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>none</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>bochs</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ramfb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </video>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <hostdev supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='mode'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>subsystem</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='startupPolicy'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>default</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>mandatory</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>requisite</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>optional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='subsysType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pci</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>scsi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='capsType'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='pciBackend'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </hostdev>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <rng supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-non-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>random</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>egd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>builtin</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </rng>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <filesystem supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='driverType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>path</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>handle</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtiofs</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </filesystem>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <tpm supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tpm-tis</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tpm-crb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>emulator</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>external</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendVersion'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>2.0</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </tpm>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <redirdev supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='bus'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </redirdev>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <channel supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pty</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>unix</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </channel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <crypto supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>qemu</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>builtin</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </crypto>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <interface supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>default</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>passt</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </interface>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <panic supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>isa</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>hyperv</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </panic>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <console supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>null</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pty</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dev</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>file</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pipe</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>stdio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>udp</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tcp</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>unix</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>qemu-vdagent</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dbus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </console>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </devices>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <gic supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <vmcoreinfo supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <genid supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <backingStoreInput supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <backup supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <async-teardown supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <ps2 supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <sev supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <sgx supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <hyperv supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='features'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>relaxed</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vapic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>spinlocks</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vpindex</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>runtime</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>synic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>stimer</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>reset</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vendor_id</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>frequencies</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>reenlightenment</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tlbflush</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ipi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>avic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>emsr_bitmap</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>xmm_input</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <defaults>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <spinlocks>4095</spinlocks>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <stimer_direct>on</stimer_direct>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </defaults>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </hyperv>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <launchSecurity supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='sectype'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tdx</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </launchSecurity>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: </domainCapabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.655 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  1 04:07:10 np0005540697 nova_compute[189491]: <domainCapabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <domain>kvm</domain>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <arch>x86_64</arch>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <vcpu max='240'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <iothreads supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <os supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <enum name='firmware'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <loader supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>rom</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pflash</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='readonly'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>yes</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>no</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='secure'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>no</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </loader>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </os>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='host-passthrough' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='hostPassthroughMigratable'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>on</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>off</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='maximum' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='maximumMigratable'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>on</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>off</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='host-model' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <vendor>AMD</vendor>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='x2apic'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='hypervisor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='stibp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='overflow-recov'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='succor'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='lbrv'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='tsc-scale'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='flushbyasid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='pause-filter'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='pfthreshold'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <feature policy='disable' name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <mode name='custom' supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Broadwell-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Cooperlake-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Denverton-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Dhyana-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Genoa'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='auto-ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='auto-ibrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Milan-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amd-psfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='no-nested-data-bp'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='null-sel-clr-base'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='stibp-always-on'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-Rome-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='EPYC-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='GraniteRapids-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-128'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-256'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx10-512'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='prefetchiti'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Haswell-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v6'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Icelake-Server-v7'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='IvyBridge-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='KnightsMill'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512er'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512pf'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='KnightsMill-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4fmaps'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-4vnniw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512er'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512pf'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G4-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tbm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Opteron_G5-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fma4'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tbm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xop'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SapphireRapids-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='amx-tile'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-bf16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-fp16'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512-vpopcntdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bitalg'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vbmi2'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrc'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fzrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='la57'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='taa-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='tsx-ldtrk'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xfd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SierraForest'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cmpccxadd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='SierraForest-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ifma'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-ne-convert'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx-vnni-int8'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='bus-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cmpccxadd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fbsdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='fsrs'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ibrs-all'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mcdt-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pbrsb-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='psdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='sbdr-ssdp-no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='serialize'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vaes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='vpclmulqdq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Client-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='hle'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='rtm'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Skylake-Server-v5'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512bw'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512cd'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512dq'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512f'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='avx512vl'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='invpcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pcid'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='pku'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='mpx'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v2'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v3'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='core-capability'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='split-lock-detect'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='Snowridge-v4'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='cldemote'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='erms'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='gfni'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdir64b'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='movdiri'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='xsaves'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='athlon'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='athlon-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='core2duo'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='core2duo-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='coreduo'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='coreduo-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='n270'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='n270-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='ss'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='phenom'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <blockers model='phenom-v1'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnow'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <feature name='3dnowext'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </blockers>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </mode>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </cpu>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <memoryBacking supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <enum name='sourceType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>file</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>anonymous</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <value>memfd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </memoryBacking>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <devices>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <disk supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='diskDevice'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>disk</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>cdrom</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>floppy</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>lun</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='bus'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ide</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>fdc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>scsi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>sata</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-non-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </disk>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <graphics supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vnc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>egl-headless</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dbus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </graphics>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <video supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='modelType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vga</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>cirrus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>none</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>bochs</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ramfb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </video>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <hostdev supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='mode'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>subsystem</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='startupPolicy'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>default</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>mandatory</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>requisite</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>optional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='subsysType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pci</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>scsi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='capsType'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='pciBackend'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </hostdev>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <rng supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtio-non-transitional</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>random</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>egd</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>builtin</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </rng>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <filesystem supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='driverType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>path</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>handle</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>virtiofs</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </filesystem>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <tpm supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tpm-tis</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tpm-crb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>emulator</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>external</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendVersion'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>2.0</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </tpm>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <redirdev supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='bus'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>usb</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </redirdev>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <channel supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pty</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>unix</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </channel>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <crypto supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>qemu</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendModel'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>builtin</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </crypto>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <interface supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='backendType'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>default</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>passt</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </interface>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <panic supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='model'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>isa</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>hyperv</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </panic>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <console supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='type'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>null</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vc</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pty</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dev</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>file</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>pipe</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>stdio</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>udp</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tcp</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>unix</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>qemu-vdagent</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>dbus</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </console>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </devices>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  <features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <gic supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <vmcoreinfo supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <genid supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <backingStoreInput supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <backup supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <async-teardown supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <ps2 supported='yes'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <sev supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <sgx supported='no'/>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <hyperv supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='features'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>relaxed</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vapic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>spinlocks</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vpindex</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>runtime</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>synic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>stimer</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>reset</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>vendor_id</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>frequencies</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>reenlightenment</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tlbflush</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>ipi</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>avic</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>emsr_bitmap</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>xmm_input</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <defaults>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <spinlocks>4095</spinlocks>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <stimer_direct>on</stimer_direct>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </defaults>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </hyperv>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    <launchSecurity supported='yes'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      <enum name='sectype'>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:        <value>tdx</value>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:      </enum>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:    </launchSecurity>
Dec  1 04:07:10 np0005540697 nova_compute[189491]:  </features>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: </domainCapabilities>
Dec  1 04:07:10 np0005540697 nova_compute[189491]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
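The XML dumped above is libvirt's `<domainCapabilities>` document, which nova's `_get_domain_capabilities` retrieves to learn which device models the hypervisor supports. A minimal sketch of pulling enum values out of such a document with only the standard library; the fragment below is a hypothetical excerpt modelled on the dump, and `enum_values` is our own helper name, not a nova API.

```python
# Sketch: extract <enum> values from a libvirt domainCapabilities document.
# CAPS_FRAGMENT is a trimmed, hypothetical excerpt of the XML in the log.
import xml.etree.ElementTree as ET

CAPS_FRAGMENT = """
<domainCapabilities>
  <devices>
    <disk supported='yes'>
      <enum name='bus'>
        <value>ide</value>
        <value>scsi</value>
        <value>virtio</value>
      </enum>
    </disk>
  </devices>
</domainCapabilities>
"""

def enum_values(caps_xml: str, device: str, enum_name: str) -> list:
    """Return the <value> entries of a named enum under devices/<device>."""
    root = ET.fromstring(caps_xml)
    path = "./devices/%s/enum[@name='%s']/value" % (device, enum_name)
    return [v.text for v in root.findall(path)]

print(enum_values(CAPS_FRAGMENT, "disk", "bus"))  # ['ide', 'scsi', 'virtio']
```

In the real code path the XML comes from the libvirt connection rather than a string, but the parsing shape is the same.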
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.721 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.721 189495 INFO nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Secure Boot support detected#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.723 189495 INFO nova.virt.libvirt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.724 189495 INFO nova.virt.libvirt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.731 189495 DEBUG nova.virt.libvirt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.868 189495 INFO nova.virt.node [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Determined node identity 143c7fe7-af1f-477a-978c-6a994d785d98 from /var/lib/nova/compute_id#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.890 189495 WARNING nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Compute nodes ['143c7fe7-af1f-477a-978c-6a994d785d98'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.937 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.977 189495 WARNING nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.978 189495 DEBUG oslo_concurrency.lockutils [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.979 189495 DEBUG oslo_concurrency.lockutils [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.980 189495 DEBUG oslo_concurrency.lockutils [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:07:10 np0005540697 nova_compute[189491]: 2025-12-01 09:07:10.980 189495 DEBUG nova.compute.resource_tracker [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 04:07:11 np0005540697 systemd[1]: Starting libvirt nodedev daemon...
Dec  1 04:07:11 np0005540697 systemd[1]: Started libvirt nodedev daemon.
Dec  1 04:07:11 np0005540697 nova_compute[189491]: 2025-12-01 09:07:11.354 189495 WARNING nova.virt.libvirt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 04:07:11 np0005540697 nova_compute[189491]: 2025-12-01 09:07:11.355 189495 DEBUG nova.compute.resource_tracker [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6021MB free_disk=72.61235046386719GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
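The `pci_devices` payload in the resource view above is plain JSON embedded in the log line. A sketch of summarising it by vendor (`1af4` is the virtio vendor ID, `8086` is Intel); the two-entry sample is copied from the log, and `count_by_vendor` is our own helper name.

```python
# Sketch: tally the hypervisor's PCI devices by vendor_id.
# PCI_DEVICES is a two-entry sample taken verbatim from the log line above.
import json
from collections import Counter

PCI_DEVICES = json.loads("""[
  {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
   "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
   "label": "label_1af4_1000", "dev_type": "type-PCI"},
  {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3",
   "product_id": "7113", "vendor_id": "8086", "numa_node": null,
   "label": "label_8086_7113", "dev_type": "type-PCI"}
]""")

def count_by_vendor(devices):
    """Count PCI devices per vendor_id."""
    return Counter(dev["vendor_id"] for dev in devices)

print(count_by_vendor(PCI_DEVICES))  # Counter({'1af4': 1, '8086': 1})
```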
Dec  1 04:07:11 np0005540697 nova_compute[189491]: 2025-12-01 09:07:11.355 189495 DEBUG oslo_concurrency.lockutils [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:07:11 np0005540697 nova_compute[189491]: 2025-12-01 09:07:11.355 189495 DEBUG oslo_concurrency.lockutils [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:07:11 np0005540697 nova_compute[189491]: 2025-12-01 09:07:11.383 189495 WARNING nova.compute.resource_tracker [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] No compute node record for compute-0.ctlplane.example.com:143c7fe7-af1f-477a-978c-6a994d785d98: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 143c7fe7-af1f-477a-978c-6a994d785d98 could not be found.#033[00m
Dec  1 04:07:11 np0005540697 nova_compute[189491]: 2025-12-01 09:07:11.408 189495 INFO nova.compute.resource_tracker [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 143c7fe7-af1f-477a-978c-6a994d785d98#033[00m
Dec  1 04:07:11 np0005540697 nova_compute[189491]: 2025-12-01 09:07:11.487 189495 DEBUG nova.compute.resource_tracker [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 04:07:11 np0005540697 nova_compute[189491]: 2025-12-01 09:07:11.488 189495 DEBUG nova.compute.resource_tracker [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 04:07:12 np0005540697 nova_compute[189491]: 2025-12-01 09:07:12.429 189495 INFO nova.scheduler.client.report [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [req-49609163-6962-493e-9c83-fe6b28a009c0] Created resource provider record via placement API for resource provider with UUID 143c7fe7-af1f-477a-978c-6a994d785d98 and name compute-0.ctlplane.example.com.#033[00m
Dec  1 04:07:12 np0005540697 nova_compute[189491]: 2025-12-01 09:07:12.845 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  1 04:07:12 np0005540697 nova_compute[189491]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  1 04:07:12 np0005540697 nova_compute[189491]: 2025-12-01 09:07:12.846 189495 INFO nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec  1 04:07:12 np0005540697 nova_compute[189491]: 2025-12-01 09:07:12.846 189495 DEBUG nova.compute.provider_tree [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 04:07:12 np0005540697 nova_compute[189491]: 2025-12-01 09:07:12.847 189495 DEBUG nova.virt.libvirt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 04:07:12 np0005540697 nova_compute[189491]: 2025-12-01 09:07:12.897 189495 DEBUG nova.scheduler.client.report [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Updated inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  1 04:07:12 np0005540697 nova_compute[189491]: 2025-12-01 09:07:12.898 189495 DEBUG nova.compute.provider_tree [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Updating resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 04:07:12 np0005540697 nova_compute[189491]: 2025-12-01 09:07:12.899 189495 DEBUG nova.compute.provider_tree [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
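The inventory dicts sent to Placement above encode schedulable capacity as `(total - reserved) * allocation_ratio` per resource class. A sketch reproducing the numbers from this host's inventory (the helper name is ours): 8 VCPUs at ratio 4.0 yield 32 schedulable VCPUs, while 7680 MB RAM minus the 512 MB reservation at ratio 1.0 yields 7168 MB.

```python
# Sketch: effective capacity implied by a Placement inventory record.
# INVENTORY mirrors the totals/reserved/ratios logged above (other keys
# like min_unit/step_size are omitted because they don't affect the total).
INVENTORY = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
}

def effective_capacity(inv):
    """Schedulable capacity per resource class: (total - reserved) * ratio."""
    return {
        rc: (spec["total"] - spec["reserved"]) * spec["allocation_ratio"]
        for rc, spec in inv.items()
    }

print(effective_capacity(INVENTORY))
```

Note the DISK_GB ratio below 1.0 (0.9) makes Placement advertise less than the physical 79 GB, a common safety margin for image-backed disks.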
Dec  1 04:07:13 np0005540697 nova_compute[189491]: 2025-12-01 09:07:13.004 189495 DEBUG nova.compute.provider_tree [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Updating resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 04:07:13 np0005540697 nova_compute[189491]: 2025-12-01 09:07:13.029 189495 DEBUG nova.compute.resource_tracker [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 04:07:13 np0005540697 nova_compute[189491]: 2025-12-01 09:07:13.029 189495 DEBUG oslo_concurrency.lockutils [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:07:13 np0005540697 nova_compute[189491]: 2025-12-01 09:07:13.030 189495 DEBUG nova.service [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  1 04:07:13 np0005540697 nova_compute[189491]: 2025-12-01 09:07:13.107 189495 DEBUG nova.service [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  1 04:07:13 np0005540697 nova_compute[189491]: 2025-12-01 09:07:13.107 189495 DEBUG nova.servicegroup.drivers.db [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  1 04:07:15 np0005540697 systemd-logind[792]: New session 25 of user zuul.
Dec  1 04:07:15 np0005540697 systemd[1]: Started Session 25 of User zuul.
Dec  1 04:07:16 np0005540697 python3.9[189969]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:07:18 np0005540697 python3.9[190127]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:07:18 np0005540697 systemd[1]: Reloading.
Dec  1 04:07:18 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:07:18 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:07:19 np0005540697 python3.9[190312]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:07:19 np0005540697 network[190329]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:07:19 np0005540697 network[190330]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:07:19 np0005540697 network[190331]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:07:24 np0005540697 python3.9[190605]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:07:25 np0005540697 python3.9[190758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:25 np0005540697 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:07:26 np0005540697 python3.9[190911]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:07:26.476 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:07:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:07:26.478 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:07:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:07:26.478 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:07:27 np0005540697 python3.9[191063]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
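The certmonger command above is a multi-line shell script logged as a single line: rsyslog escapes embedded control characters as `#` plus their octal code, so every `#012` is a newline (and the trailing `#033[00m` on other lines is the ESC of an ANSI color reset). A sketch that undoes this escaping; `unescape_rsyslog` is our own name, and the sample input is an abridged version of the logged command.

```python
# Sketch: decode rsyslog's "#NNN" octal control-character escapes,
# turning the one-line logged script back into runnable shell text.
import re

def unescape_rsyslog(msg: str) -> str:
    """Replace #NNN (octal) control-character escapes with the real char."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), msg)

# Abridged version of the certmonger command from the log line above.
script = unescape_rsyslog(
    "if systemctl is-active certmonger.service; then#012"
    "  systemctl disable --now certmonger.service#012fi#012"
)
print(script)
```

Running it prints the reconstructed three-line `if` block, which is what Ansible actually executed on the host.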
Dec  1 04:07:28 np0005540697 python3.9[191215]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 04:07:29 np0005540697 python3.9[191367]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:07:29 np0005540697 systemd[1]: Reloading.
Dec  1 04:07:29 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:07:29 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:07:30 np0005540697 python3.9[191554]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:07:30 np0005540697 python3.9[191707]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:07:31 np0005540697 python3.9[191857]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:07:32 np0005540697 python3.9[192009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:32 np0005540697 podman[192057]: 2025-12-01 09:07:32.680677977 +0000 UTC m=+0.060757524 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:07:32 np0005540697 python3.9[192149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580051.8621788-133-57044984692393/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:07:33 np0005540697 python3.9[192301]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  1 04:07:34 np0005540697 podman[192453]: 2025-12-01 09:07:34.786943958 +0000 UTC m=+0.105830258 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller)
Dec  1 04:07:34 np0005540697 python3.9[192454]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  1 04:07:34 np0005540697 podman[192479]: 2025-12-01 09:07:34.894712872 +0000 UTC m=+0.076165224 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 04:07:35 np0005540697 python3.9[192651]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 04:07:36 np0005540697 python3.9[192809]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 04:07:37 np0005540697 python3.9[192967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:38 np0005540697 python3.9[193088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764580057.5434082-201-269284765487224/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:39 np0005540697 python3.9[193238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:39 np0005540697 python3.9[193359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764580058.7419033-201-127948682330603/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:40 np0005540697 python3.9[193509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:41 np0005540697 python3.9[193630]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764580059.994161-201-237061810771917/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:41 np0005540697 python3.9[193780]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:07:42 np0005540697 python3.9[193932]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:07:43 np0005540697 python3.9[194084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:43 np0005540697 python3.9[194205]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580062.7981274-260-41547324616950/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:44 np0005540697 python3.9[194355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:44 np0005540697 python3.9[194431]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:45 np0005540697 nova_compute[189491]: 2025-12-01 09:07:45.109 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:07:45 np0005540697 nova_compute[189491]: 2025-12-01 09:07:45.220 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:07:45 np0005540697 python3.9[194581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:46 np0005540697 python3.9[194702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580065.0972366-260-130579400475857/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:46 np0005540697 python3.9[194852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:47 np0005540697 python3.9[194973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580066.3333185-260-17312254881028/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:47 np0005540697 python3.9[195123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:49 np0005540697 python3.9[195244]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580067.4371815-260-174471234001386/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:50 np0005540697 python3.9[195394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:50 np0005540697 python3.9[195515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580069.8344142-260-139860428522479/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:51 np0005540697 python3.9[195665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:52 np0005540697 python3.9[195786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580071.0375187-260-29145134950437/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:52 np0005540697 python3.9[195936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:53 np0005540697 python3.9[196057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580072.337705-260-211422917729930/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:54 np0005540697 python3.9[196207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:54 np0005540697 python3.9[196328]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580073.6044981-260-280021737172938/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:55 np0005540697 python3.9[196478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:56 np0005540697 python3.9[196599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580074.9933858-260-125338917106540/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:56 np0005540697 python3.9[196749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:57 np0005540697 python3.9[196870]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580076.1940234-260-84453393362739/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:58 np0005540697 python3.9[197020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:58 np0005540697 python3.9[197096]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:07:59 np0005540697 python3.9[197246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:07:59 np0005540697 python3.9[197322]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:00 np0005540697 python3.9[197472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:08:01 np0005540697 python3.9[197548]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:01 np0005540697 python3.9[197700]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:02 np0005540697 python3.9[197852]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:02 np0005540697 podman[197853]: 2025-12-01 09:08:02.904626013 +0000 UTC m=+0.059134333 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 04:08:03 np0005540697 python3.9[198023]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:08:04 np0005540697 python3.9[198175]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:08:04 np0005540697 systemd[1]: Reloading.
Dec  1 04:08:04 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:08:04 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:08:04 np0005540697 systemd[1]: Listening on Podman API Socket.
Dec  1 04:08:05 np0005540697 podman[198338]: 2025-12-01 09:08:05.616517607 +0000 UTC m=+0.083092785 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:08:05 np0005540697 podman[198339]: 2025-12-01 09:08:05.666983634 +0000 UTC m=+0.134473615 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:08:05 np0005540697 python3.9[198403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:08:06 np0005540697 python3.9[198534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580085.1899636-482-245168747852444/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:08:06 np0005540697 python3.9[198610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:08:07 np0005540697 python3.9[198733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580085.1899636-482-245168747852444/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:08:08 np0005540697 python3.9[198885]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  1 04:08:09 np0005540697 python3.9[199037]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:08:09 np0005540697 nova_compute[189491]: 2025-12-01 09:08:09.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:09 np0005540697 nova_compute[189491]: 2025-12-01 09:08:09.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:09 np0005540697 nova_compute[189491]: 2025-12-01 09:08:09.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 04:08:09 np0005540697 nova_compute[189491]: 2025-12-01 09:08:09.717 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.024 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.025 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.025 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.026 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.026 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.027 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.027 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.028 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.028 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.453 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.454 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.455 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.455 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.701 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.703 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6036MB free_disk=72.61262893676758GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.703 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.704 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:08:10 np0005540697 python3[199191]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.991 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 04:08:10 np0005540697 nova_compute[189491]: 2025-12-01 09:08:10.992 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 04:08:11 np0005540697 nova_compute[189491]: 2025-12-01 09:08:11.034 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 04:08:11 np0005540697 nova_compute[189491]: 2025-12-01 09:08:11.054 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 04:08:11 np0005540697 nova_compute[189491]: 2025-12-01 09:08:11.055 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 04:08:11 np0005540697 nova_compute[189491]: 2025-12-01 09:08:11.055 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:08:11 np0005540697 podman[199227]: 2025-12-01 09:08:11.13326454 +0000 UTC m=+0.051733340 container create ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, 
org.label-schema.build-date=20251125)
Dec  1 04:08:11 np0005540697 podman[199227]: 2025-12-01 09:08:11.103905704 +0000 UTC m=+0.022374484 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  1 04:08:11 np0005540697 python3[199191]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume 
/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec  1 04:08:11 np0005540697 python3.9[199417]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:08:12 np0005540697 python3.9[199571]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:13 np0005540697 python3.9[199722]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580093.020483-546-16151772445730/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:14 np0005540697 python3.9[199801]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:08:14 np0005540697 systemd[1]: Reloading.
Dec  1 04:08:14 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:08:14 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:08:15 np0005540697 python3.9[199912]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:08:15 np0005540697 systemd[1]: Reloading.
Dec  1 04:08:15 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:08:15 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:08:16 np0005540697 systemd[1]: Starting ceilometer_agent_compute container...
Dec  1 04:08:16 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:08:16 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:16 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:16 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:16 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:16 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.
Dec  1 04:08:16 np0005540697 podman[199952]: 2025-12-01 09:08:16.581844019 +0000 UTC m=+0.424347812 container init ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + sudo -E kolla_set_configs
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: sudo: unable to send audit message: Operation not permitted
Dec  1 04:08:16 np0005540697 podman[199952]: 2025-12-01 09:08:16.653896971 +0000 UTC m=+0.496400724 container start ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Validating config file
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Copying service configuration files
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: INFO:__main__:Writing out command to execute
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: ++ cat /run_command
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + ARGS=
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + sudo kolla_copy_cacerts
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: sudo: unable to send audit message: Operation not permitted
Dec  1 04:08:16 np0005540697 podman[199952]: ceilometer_agent_compute
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + [[ ! -n '' ]]
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + . kolla_extend_start
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + umask 0022
Dec  1 04:08:16 np0005540697 ceilometer_agent_compute[199967]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  1 04:08:16 np0005540697 systemd[1]: Started ceilometer_agent_compute container.
Dec  1 04:08:16 np0005540697 podman[199974]: 2025-12-01 09:08:16.769869088 +0000 UTC m=+0.105768106 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 04:08:16 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-5520e5a1e945fca9.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:08:16 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-5520e5a1e945fca9.service: Failed with result 'exit-code'.
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.550 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.550 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.550 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.550 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.551 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.552 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.552 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.552 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.552 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.552 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.552 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.552 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.552 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.553 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.554 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.555 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.556 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.557 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.559 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.564 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.564 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.564 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.564 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.587 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.588 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.588 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.588 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.588 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.589 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.589 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.589 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.589 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.589 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.589 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.590 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.590 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.590 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.590 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.590 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.590 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.590 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.590 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.591 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.591 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.591 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.591 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.591 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.591 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.591 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.591 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.592 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.592 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.592 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.592 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.592 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.592 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.592 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.593 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.593 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.593 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.593 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.593 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.593 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.594 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.594 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.594 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.594 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.594 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.594 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.594 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.595 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.596 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.596 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.596 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.596 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.596 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.596 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.596 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.596 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.597 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.597 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.597 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.597 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.597 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.597 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.597 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.597 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.598 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.598 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.598 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.598 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.598 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.598 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.598 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.599 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.599 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.599 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.599 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.599 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.599 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.599 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.599 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.600 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.600 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.600 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.600 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.600 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.600 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.600 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 python3.9[200149]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.600 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.601 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.602 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.602 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.602 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.602 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.602 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.602 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.602 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.602 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.603 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.603 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.603 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.603 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.603 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.603 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.603 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.603 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.604 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.604 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.604 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.604 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.604 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.604 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.604 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.604 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.605 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.605 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.605 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.605 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.605 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.605 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.605 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.605 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.606 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.606 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.606 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.606 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.606 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.606 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.606 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.607 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.607 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.607 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.607 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.607 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.607 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.607 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.608 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.608 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.608 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.608 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.610 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.612 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.613 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  1 04:08:17 np0005540697 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.713 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.787 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.797 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.797 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.797 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.814 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.815 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.815 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.914 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.914 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.914 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.914 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.915 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.916 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.917 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.918 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.919 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.920 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.921 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.922 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.923 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.924 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.925 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.926 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.927 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.928 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.928 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  1 04:08:17 np0005540697 ceilometer_agent_compute[199967]: 2025-12-01 09:08:17.940 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  1 04:08:17 np0005540697 virtqemud[189211]: End of file while reading data: Input/output error
Dec  1 04:08:18 np0005540697 systemd[1]: libpod-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope: Deactivated successfully.
Dec  1 04:08:18 np0005540697 conmon[199967]: conmon ac40fb0e07b42b30d585 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope/container/memory.events
Dec  1 04:08:18 np0005540697 systemd[1]: libpod-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope: Consumed 1.534s CPU time.
Dec  1 04:08:18 np0005540697 podman[200161]: 2025-12-01 09:08:18.112229263 +0000 UTC m=+0.449552545 container died ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  1 04:08:18 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-5520e5a1e945fca9.timer: Deactivated successfully.
Dec  1 04:08:18 np0005540697 systemd[1]: Stopped /usr/bin/podman healthcheck run ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.
Dec  1 04:08:18 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-userdata-shm.mount: Deactivated successfully.
Dec  1 04:08:18 np0005540697 systemd[1]: var-lib-containers-storage-overlay-8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803-merged.mount: Deactivated successfully.
Dec  1 04:08:18 np0005540697 podman[200161]: 2025-12-01 09:08:18.209813156 +0000 UTC m=+0.547136438 container cleanup ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, config_id=edpm)
Dec  1 04:08:18 np0005540697 podman[200161]: ceilometer_agent_compute
Dec  1 04:08:18 np0005540697 podman[200193]: ceilometer_agent_compute
Dec  1 04:08:18 np0005540697 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  1 04:08:18 np0005540697 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  1 04:08:18 np0005540697 systemd[1]: Starting ceilometer_agent_compute container...
Dec  1 04:08:18 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:08:18 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:18 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:18 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:18 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f52486758ce9c0c7c13500c17d2f639f73cebdca449ce28f3a6f63e59f5b803/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:18 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.
Dec  1 04:08:18 np0005540697 podman[200206]: 2025-12-01 09:08:18.604231317 +0000 UTC m=+0.261511797 container init ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, 
org.label-schema.schema-version=1.0)
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + sudo -E kolla_set_configs
Dec  1 04:08:18 np0005540697 auditd[703]: Audit daemon rotating log files
Dec  1 04:08:18 np0005540697 podman[200206]: 2025-12-01 09:08:18.633776147 +0000 UTC m=+0.291056557 container start ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, 
container_name=ceilometer_agent_compute)
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: sudo: unable to send audit message: Operation not permitted
Dec  1 04:08:18 np0005540697 podman[200206]: ceilometer_agent_compute
Dec  1 04:08:18 np0005540697 systemd[1]: Started ceilometer_agent_compute container.
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Validating config file
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Copying service configuration files
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: INFO:__main__:Writing out command to execute
Dec  1 04:08:18 np0005540697 podman[200229]: 2025-12-01 09:08:18.718903282 +0000 UTC m=+0.065790568 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 04:08:18 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-7e57991bcbe4a900.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:08:18 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-7e57991bcbe4a900.service: Failed with result 'exit-code'.
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: ++ cat /run_command
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + ARGS=
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + sudo kolla_copy_cacerts
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: sudo: unable to send audit message: Operation not permitted
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + [[ ! -n '' ]]
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + . kolla_extend_start
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + umask 0022
Dec  1 04:08:18 np0005540697 ceilometer_agent_compute[200222]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  1 04:08:19 np0005540697 python3.9[200404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.571 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.572 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.573 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.574 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.575 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.576 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.577 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.578 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.579 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.580 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.581 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.582 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.583 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.583 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.583 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.583 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.583 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.583 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.583 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.603 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.604 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.604 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.604 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.604 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.604 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.605 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.606 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.607 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.608 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.609 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.610 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.611 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.612 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.613 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.614 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.615 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.616 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.617 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.618 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.620 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.622 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.622 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.625 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.632 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.632 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.632 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.742 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.742 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.742 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.742 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.743 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.744 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.745 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.746 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.747 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.748 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.749 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.750 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.750 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.750 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.750 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.750 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.750 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.750 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.750 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.751 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.752 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.753 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.754 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.755 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.756 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.757 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.757 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.757 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.757 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.757 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.757 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.757 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.757 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.760 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.775 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.775 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.775 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.776 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:08:19.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:08:20 np0005540697 python3.9[200540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580098.903556-578-202797533643288/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:08:21 np0005540697 python3.9[200692]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  1 04:08:21 np0005540697 python3.9[200844]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:08:22 np0005540697 python3[200996]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:08:23 np0005540697 podman[201030]: 2025-12-01 09:08:23.179705822 +0000 UTC m=+0.100201569 container create dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm)
Dec  1 04:08:23 np0005540697 podman[201030]: 2025-12-01 09:08:23.106339637 +0000 UTC m=+0.026835364 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  1 04:08:23 np0005540697 python3[200996]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  1 04:08:24 np0005540697 python3.9[201220]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:08:24 np0005540697 python3.9[201374]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:25 np0005540697 python3.9[201525]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580105.0282826-631-207340564556825/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:26 np0005540697 python3.9[201601]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:08:26 np0005540697 systemd[1]: Reloading.
Dec  1 04:08:26 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:08:26 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:08:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:08:26.489 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:08:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:08:26.492 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:08:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:08:26.492 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:08:27 np0005540697 python3.9[201712]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:08:27 np0005540697 systemd[1]: Reloading.
Dec  1 04:08:27 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:08:27 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:08:27 np0005540697 systemd[1]: Starting node_exporter container...
Dec  1 04:08:27 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:08:27 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f721b21c5348e65d57932fa157d5187c1e252508b5bf2c3d57b3a98fa585b88/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:27 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f721b21c5348e65d57932fa157d5187c1e252508b5bf2c3d57b3a98fa585b88/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:28 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.
Dec  1 04:08:28 np0005540697 podman[201753]: 2025-12-01 09:08:28.089406978 +0000 UTC m=+0.260118030 container init dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.106Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.106Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.106Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.106Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.106Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=arp
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=bcache
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=bonding
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=cpu
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=edac
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=filefd
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=netclass
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=netdev
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=netstat
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=nfs
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=nvme
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=softnet
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=systemd
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=xfs
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.107Z caller=node_exporter.go:117 level=info collector=zfs
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.108Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  1 04:08:28 np0005540697 node_exporter[201769]: ts=2025-12-01T09:08:28.108Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  1 04:08:28 np0005540697 podman[201753]: 2025-12-01 09:08:28.125236365 +0000 UTC m=+0.295947447 container start dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 04:08:28 np0005540697 podman[201753]: node_exporter
Dec  1 04:08:28 np0005540697 systemd[1]: Started node_exporter container.
Dec  1 04:08:28 np0005540697 podman[201778]: 2025-12-01 09:08:28.205316094 +0000 UTC m=+0.066615027 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:08:29 np0005540697 python3.9[201954]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:08:29 np0005540697 systemd[1]: Stopping node_exporter container...
Dec  1 04:08:29 np0005540697 systemd[1]: libpod-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope: Deactivated successfully.
Dec  1 04:08:29 np0005540697 podman[201958]: 2025-12-01 09:08:29.189614038 +0000 UTC m=+0.067444638 container died dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 04:08:29 np0005540697 systemd[1]: dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30-5b335f7f17aca518.timer: Deactivated successfully.
Dec  1 04:08:29 np0005540697 systemd[1]: Stopped /usr/bin/podman healthcheck run dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.
Dec  1 04:08:29 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30-userdata-shm.mount: Deactivated successfully.
Dec  1 04:08:29 np0005540697 systemd[1]: var-lib-containers-storage-overlay-8f721b21c5348e65d57932fa157d5187c1e252508b5bf2c3d57b3a98fa585b88-merged.mount: Deactivated successfully.
Dec  1 04:08:29 np0005540697 podman[201958]: 2025-12-01 09:08:29.351126691 +0000 UTC m=+0.228957291 container cleanup dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:08:29 np0005540697 podman[201958]: node_exporter
Dec  1 04:08:29 np0005540697 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 04:08:29 np0005540697 podman[201987]: node_exporter
Dec  1 04:08:29 np0005540697 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  1 04:08:29 np0005540697 systemd[1]: Stopped node_exporter container.
Dec  1 04:08:29 np0005540697 systemd[1]: Starting node_exporter container...
Dec  1 04:08:29 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:08:29 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f721b21c5348e65d57932fa157d5187c1e252508b5bf2c3d57b3a98fa585b88/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:29 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f721b21c5348e65d57932fa157d5187c1e252508b5bf2c3d57b3a98fa585b88/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:29 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.
Dec  1 04:08:29 np0005540697 podman[202000]: 2025-12-01 09:08:29.879603127 +0000 UTC m=+0.431933351 container init dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.901Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.901Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.901Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.902Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.902Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=arp
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=bcache
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=bonding
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=cpu
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=edac
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=filefd
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=netclass
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=netdev
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=netstat
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=nfs
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=nvme
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.903Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=node_exporter.go:117 level=info collector=softnet
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=node_exporter.go:117 level=info collector=systemd
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=node_exporter.go:117 level=info collector=xfs
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=node_exporter.go:117 level=info collector=zfs
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.904Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  1 04:08:29 np0005540697 node_exporter[202015]: ts=2025-12-01T09:08:29.905Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  1 04:08:29 np0005540697 podman[202000]: 2025-12-01 09:08:29.919275437 +0000 UTC m=+0.471605571 container start dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:08:29 np0005540697 podman[202000]: node_exporter
Dec  1 04:08:29 np0005540697 systemd[1]: Started node_exporter container.
Dec  1 04:08:30 np0005540697 podman[202024]: 2025-12-01 09:08:30.015917496 +0000 UTC m=+0.077637280 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 04:08:30 np0005540697 python3.9[202199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:08:31 np0005540697 python3.9[202322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580110.1975417-663-106839264660314/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:08:32 np0005540697 python3.9[202474]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  1 04:08:33 np0005540697 python3.9[202626]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:08:33 np0005540697 podman[202731]: 2025-12-01 09:08:33.69834905 +0000 UTC m=+0.071798425 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  1 04:08:34 np0005540697 python3[202798]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:08:36 np0005540697 podman[202854]: 2025-12-01 09:08:36.255130293 +0000 UTC m=+0.407964591 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  1 04:08:36 np0005540697 podman[202855]: 2025-12-01 09:08:36.28501948 +0000 UTC m=+0.430149401 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 04:08:37 np0005540697 podman[202811]: 2025-12-01 09:08:37.180075024 +0000 UTC m=+3.098338681 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  1 04:08:37 np0005540697 podman[202951]: 2025-12-01 09:08:37.33919702 +0000 UTC m=+0.027930259 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  1 04:08:38 np0005540697 podman[202951]: 2025-12-01 09:08:38.381653075 +0000 UTC m=+1.070386294 container create 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm)
Dec  1 04:08:38 np0005540697 python3[202798]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec  1 04:08:39 np0005540697 python3.9[203142]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:08:40 np0005540697 python3.9[203296]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:40 np0005540697 python3.9[203447]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580120.1434138-716-168241045088002/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:41 np0005540697 python3.9[203523]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:08:41 np0005540697 systemd[1]: Reloading.
Dec  1 04:08:41 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:08:41 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:08:42 np0005540697 python3.9[203634]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:08:42 np0005540697 systemd[1]: Reloading.
Dec  1 04:08:42 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:08:42 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:08:43 np0005540697 systemd[1]: Starting podman_exporter container...
Dec  1 04:08:43 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:08:43 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72fe19dd023e31366c92b12bfe720a6506737d5dbcacba4b3c2699e5c8488c52/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:43 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72fe19dd023e31366c92b12bfe720a6506737d5dbcacba4b3c2699e5c8488c52/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:43 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.
Dec  1 04:08:43 np0005540697 podman[203674]: 2025-12-01 09:08:43.538872244 +0000 UTC m=+0.316806328 container init 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 04:08:43 np0005540697 podman_exporter[203689]: ts=2025-12-01T09:08:43.554Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  1 04:08:43 np0005540697 podman_exporter[203689]: ts=2025-12-01T09:08:43.554Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  1 04:08:43 np0005540697 podman_exporter[203689]: ts=2025-12-01T09:08:43.554Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  1 04:08:43 np0005540697 podman_exporter[203689]: ts=2025-12-01T09:08:43.554Z caller=handler.go:105 level=info collector=container
Dec  1 04:08:43 np0005540697 podman[203674]: 2025-12-01 09:08:43.566853254 +0000 UTC m=+0.344787328 container start 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 04:08:43 np0005540697 podman[203674]: podman_exporter
Dec  1 04:08:43 np0005540697 systemd[1]: Starting Podman API Service...
Dec  1 04:08:43 np0005540697 systemd[1]: Started Podman API Service.
Dec  1 04:08:43 np0005540697 systemd[1]: Started podman_exporter container.
Dec  1 04:08:43 np0005540697 podman[203700]: time="2025-12-01T09:08:43Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec  1 04:08:43 np0005540697 podman[203700]: time="2025-12-01T09:08:43Z" level=info msg="Setting parallel job count to 25"
Dec  1 04:08:43 np0005540697 podman[203700]: time="2025-12-01T09:08:43Z" level=info msg="Using sqlite as database backend"
Dec  1 04:08:43 np0005540697 podman[203700]: time="2025-12-01T09:08:43Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec  1 04:08:43 np0005540697 podman[203700]: time="2025-12-01T09:08:43Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec  1 04:08:43 np0005540697 podman[203700]: time="2025-12-01T09:08:43Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec  1 04:08:43 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:08:43 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  1 04:08:43 np0005540697 podman[203700]: time="2025-12-01T09:08:43Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 04:08:43 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:08:43 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19588 "" "Go-http-client/1.1"
Dec  1 04:08:43 np0005540697 podman_exporter[203689]: ts=2025-12-01T09:08:43.634Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  1 04:08:43 np0005540697 podman_exporter[203689]: ts=2025-12-01T09:08:43.634Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  1 04:08:43 np0005540697 podman_exporter[203689]: ts=2025-12-01T09:08:43.635Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  1 04:08:43 np0005540697 podman[203698]: 2025-12-01 09:08:43.662850005 +0000 UTC m=+0.083392086 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 04:08:43 np0005540697 systemd[1]: 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62-6cb2f008a720c7bf.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:08:43 np0005540697 systemd[1]: 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62-6cb2f008a720c7bf.service: Failed with result 'exit-code'.
Dec  1 04:08:44 np0005540697 python3.9[203886]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:08:44 np0005540697 systemd[1]: Stopping podman_exporter container...
Dec  1 04:08:44 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:08:43 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec  1 04:08:44 np0005540697 systemd[1]: libpod-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope: Deactivated successfully.
Dec  1 04:08:44 np0005540697 podman[203890]: 2025-12-01 09:08:44.923947041 +0000 UTC m=+0.340262806 container died 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 04:08:44 np0005540697 systemd[1]: 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62-6cb2f008a720c7bf.timer: Deactivated successfully.
Dec  1 04:08:44 np0005540697 systemd[1]: Stopped /usr/bin/podman healthcheck run 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.
Dec  1 04:08:45 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62-userdata-shm.mount: Deactivated successfully.
Dec  1 04:08:45 np0005540697 systemd[1]: var-lib-containers-storage-overlay-72fe19dd023e31366c92b12bfe720a6506737d5dbcacba4b3c2699e5c8488c52-merged.mount: Deactivated successfully.
Dec  1 04:08:46 np0005540697 podman[203890]: 2025-12-01 09:08:46.158894234 +0000 UTC m=+1.575209989 container cleanup 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 04:08:46 np0005540697 podman[203890]: podman_exporter
Dec  1 04:08:46 np0005540697 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 04:08:46 np0005540697 podman[203917]: podman_exporter
Dec  1 04:08:46 np0005540697 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  1 04:08:46 np0005540697 systemd[1]: Stopped podman_exporter container.
Dec  1 04:08:46 np0005540697 systemd[1]: Starting podman_exporter container...
Dec  1 04:08:46 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:08:46 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72fe19dd023e31366c92b12bfe720a6506737d5dbcacba4b3c2699e5c8488c52/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:46 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72fe19dd023e31366c92b12bfe720a6506737d5dbcacba4b3c2699e5c8488c52/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:47 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.
Dec  1 04:08:47 np0005540697 podman[203930]: 2025-12-01 09:08:47.225168637 +0000 UTC m=+0.941551565 container init 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:08:47 np0005540697 podman_exporter[203945]: ts=2025-12-01T09:08:47.236Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  1 04:08:47 np0005540697 podman_exporter[203945]: ts=2025-12-01T09:08:47.237Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  1 04:08:47 np0005540697 podman_exporter[203945]: ts=2025-12-01T09:08:47.237Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  1 04:08:47 np0005540697 podman_exporter[203945]: ts=2025-12-01T09:08:47.237Z caller=handler.go:105 level=info collector=container
Dec  1 04:08:47 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:08:47 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  1 04:08:47 np0005540697 podman[203700]: time="2025-12-01T09:08:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 04:08:47 np0005540697 podman[203930]: 2025-12-01 09:08:47.248154166 +0000 UTC m=+0.964537084 container start 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:08:47 np0005540697 podman[203930]: podman_exporter
Dec  1 04:08:47 np0005540697 systemd[1]: Started podman_exporter container.
Dec  1 04:08:47 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:08:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19590 "" "Go-http-client/1.1"
Dec  1 04:08:47 np0005540697 podman_exporter[203945]: ts=2025-12-01T09:08:47.583Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  1 04:08:47 np0005540697 podman_exporter[203945]: ts=2025-12-01T09:08:47.584Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  1 04:08:47 np0005540697 podman_exporter[203945]: ts=2025-12-01T09:08:47.585Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  1 04:08:47 np0005540697 podman[203955]: 2025-12-01 09:08:47.644852853 +0000 UTC m=+0.386958922 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 04:08:48 np0005540697 python3.9[204130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:08:48 np0005540697 python3.9[204253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580127.7414749-748-236900274774105/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:08:48 np0005540697 podman[204254]: 2025-12-01 09:08:48.888195348 +0000 UTC m=+0.061168167 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  1 04:08:48 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-7e57991bcbe4a900.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:08:48 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-7e57991bcbe4a900.service: Failed with result 'exit-code'.
Dec  1 04:08:49 np0005540697 python3.9[204424]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  1 04:08:50 np0005540697 python3.9[204576]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:08:51 np0005540697 python3[204728]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:08:56 np0005540697 podman[204741]: 2025-12-01 09:08:56.226724697 +0000 UTC m=+4.517369693 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  1 04:08:56 np0005540697 podman[204839]: 2025-12-01 09:08:56.356729766 +0000 UTC m=+0.048987771 container create 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec  1 04:08:56 np0005540697 podman[204839]: 2025-12-01 09:08:56.332700582 +0000 UTC m=+0.024958607 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  1 04:08:56 np0005540697 python3[204728]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  1 04:08:57 np0005540697 python3.9[205028]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:08:58 np0005540697 python3.9[205182]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:58 np0005540697 python3.9[205333]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580138.1681004-801-90611126444666/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:08:59 np0005540697 python3.9[205409]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:08:59 np0005540697 systemd[1]: Reloading.
Dec  1 04:08:59 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:08:59 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:09:00 np0005540697 podman[205445]: 2025-12-01 09:09:00.13194119 +0000 UTC m=+0.062853468 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 04:09:01 np0005540697 python3.9[205544]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:09:01 np0005540697 systemd[1]: Reloading.
Dec  1 04:09:01 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:09:01 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:09:01 np0005540697 systemd[1]: Starting openstack_network_exporter container...
Dec  1 04:09:02 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:09:02 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f4e6d06edde0d612077c5a0f719676c8ec836e1b1de13bfd322e812ff35743/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 04:09:02 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f4e6d06edde0d612077c5a0f719676c8ec836e1b1de13bfd322e812ff35743/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:09:02 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f4e6d06edde0d612077c5a0f719676c8ec836e1b1de13bfd322e812ff35743/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:09:02 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.
Dec  1 04:09:02 np0005540697 podman[205586]: 2025-12-01 09:09:02.792033883 +0000 UTC m=+1.009536796 container init 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible)
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *bridge.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *coverage.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *datapath.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *iface.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *memory.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *ovnnorthd.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *ovn.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *ovsdbserver.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *pmd_perf.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *pmd_rxq.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: INFO    09:09:02 main.go:48: registering *vswitch.Collector
Dec  1 04:09:02 np0005540697 openstack_network_exporter[205602]: NOTICE  09:09:02 main.go:76: listening on https://:9105/metrics
Dec  1 04:09:02 np0005540697 podman[205586]: 2025-12-01 09:09:02.829750049 +0000 UTC m=+1.047252942 container start 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Dec  1 04:09:02 np0005540697 podman[205586]: openstack_network_exporter
Dec  1 04:09:02 np0005540697 systemd[1]: Started openstack_network_exporter container.
Dec  1 04:09:02 np0005540697 podman[205612]: 2025-12-01 09:09:02.931614844 +0000 UTC m=+0.088521851 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350)
Dec  1 04:09:03 np0005540697 python3.9[205786]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:09:03 np0005540697 systemd[1]: Stopping openstack_network_exporter container...
Dec  1 04:09:03 np0005540697 podman[205788]: 2025-12-01 09:09:03.991641556 +0000 UTC m=+0.201148017 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  1 04:09:03 np0005540697 systemd[1]: libpod-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope: Deactivated successfully.
Dec  1 04:09:04 np0005540697 podman[205800]: 2025-12-01 09:09:04.002164462 +0000 UTC m=+0.177941584 container died 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, 
container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Dec  1 04:09:04 np0005540697 systemd[1]: 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0-55bd23547ce45798.timer: Deactivated successfully.
Dec  1 04:09:04 np0005540697 systemd[1]: Stopped /usr/bin/podman healthcheck run 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.
Dec  1 04:09:04 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0-userdata-shm.mount: Deactivated successfully.
Dec  1 04:09:04 np0005540697 systemd[1]: var-lib-containers-storage-overlay-25f4e6d06edde0d612077c5a0f719676c8ec836e1b1de13bfd322e812ff35743-merged.mount: Deactivated successfully.
Dec  1 04:09:05 np0005540697 podman[205800]: 2025-12-01 09:09:05.18017212 +0000 UTC m=+1.355949222 container cleanup 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 04:09:05 np0005540697 podman[205800]: openstack_network_exporter
Dec  1 04:09:05 np0005540697 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 04:09:05 np0005540697 podman[205837]: openstack_network_exporter
Dec  1 04:09:05 np0005540697 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  1 04:09:05 np0005540697 systemd[1]: Stopped openstack_network_exporter container.
Dec  1 04:09:05 np0005540697 systemd[1]: Starting openstack_network_exporter container...
Dec  1 04:09:05 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:09:05 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f4e6d06edde0d612077c5a0f719676c8ec836e1b1de13bfd322e812ff35743/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 04:09:05 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f4e6d06edde0d612077c5a0f719676c8ec836e1b1de13bfd322e812ff35743/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:09:05 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f4e6d06edde0d612077c5a0f719676c8ec836e1b1de13bfd322e812ff35743/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:09:05 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.
Dec  1 04:09:05 np0005540697 podman[205849]: 2025-12-01 09:09:05.392544119 +0000 UTC m=+0.113801056 container init 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64)
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *bridge.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *coverage.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *datapath.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *iface.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *memory.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *ovnnorthd.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *ovn.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *ovsdbserver.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *pmd_perf.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *pmd_rxq.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: INFO    09:09:05 main.go:48: registering *vswitch.Collector
Dec  1 04:09:05 np0005540697 openstack_network_exporter[205866]: NOTICE  09:09:05 main.go:76: listening on https://:9105/metrics
Dec  1 04:09:05 np0005540697 podman[205849]: 2025-12-01 09:09:05.426913614 +0000 UTC m=+0.148170571 container start 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-type=git)
Dec  1 04:09:05 np0005540697 podman[205849]: openstack_network_exporter
Dec  1 04:09:05 np0005540697 systemd[1]: Started openstack_network_exporter container.
Dec  1 04:09:05 np0005540697 podman[205876]: 2025-12-01 09:09:05.503338341 +0000 UTC m=+0.060913252 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec  1 04:09:06 np0005540697 python3.9[206048]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 04:09:07 np0005540697 podman[206172]: 2025-12-01 09:09:07.227794215 +0000 UTC m=+0.060513862 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:09:07 np0005540697 podman[206173]: 2025-12-01 09:09:07.256704686 +0000 UTC m=+0.088760057 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:09:07 np0005540697 python3.9[206244]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  1 04:09:08 np0005540697 python3.9[206413]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:08 np0005540697 systemd[1]: Started libpod-conmon-8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.scope.
Dec  1 04:09:08 np0005540697 podman[206414]: 2025-12-01 09:09:08.810698599 +0000 UTC m=+0.304263383 container exec 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 04:09:08 np0005540697 podman[206414]: 2025-12-01 09:09:08.850478495 +0000 UTC m=+0.344043219 container exec_died 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:09:08 np0005540697 systemd[1]: libpod-conmon-8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.scope: Deactivated successfully.
Dec  1 04:09:09 np0005540697 python3.9[206595]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:09 np0005540697 systemd[1]: Started libpod-conmon-8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.scope.
Dec  1 04:09:09 np0005540697 podman[206596]: 2025-12-01 09:09:09.875765833 +0000 UTC m=+0.143185290 container exec 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:09:09 np0005540697 podman[206596]: 2025-12-01 09:09:09.91847062 +0000 UTC m=+0.185890067 container exec_died 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  1 04:09:09 np0005540697 systemd[1]: libpod-conmon-8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.scope: Deactivated successfully.
Dec  1 04:09:10 np0005540697 python3.9[206780]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.047 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.128 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.128 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.128 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 04:09:11 np0005540697 python3.9[206932]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.417 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.418 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.419 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.419 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.419 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.419 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.420 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.450 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.450 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.451 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.451 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.597 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.598 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5906MB free_disk=72.44344329833984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.598 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.598 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.648 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.648 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.665 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.726 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.727 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 04:09:11 np0005540697 nova_compute[189491]: 2025-12-01 09:09:11.728 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:09:12 np0005540697 nova_compute[189491]: 2025-12-01 09:09:12.021 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:12 np0005540697 nova_compute[189491]: 2025-12-01 09:09:12.022 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:12 np0005540697 nova_compute[189491]: 2025-12-01 09:09:12.022 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:09:12 np0005540697 python3.9[207097]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:12 np0005540697 systemd[1]: Started libpod-conmon-f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.scope.
Dec  1 04:09:12 np0005540697 podman[207098]: 2025-12-01 09:09:12.257066693 +0000 UTC m=+0.067034388 container exec f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 04:09:12 np0005540697 podman[207117]: 2025-12-01 09:09:12.321167961 +0000 UTC m=+0.051751938 container exec_died f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 04:09:12 np0005540697 podman[207098]: 2025-12-01 09:09:12.327573497 +0000 UTC m=+0.137541172 container exec_died f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  1 04:09:12 np0005540697 systemd[1]: libpod-conmon-f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.scope: Deactivated successfully.
Dec  1 04:09:13 np0005540697 python3.9[207282]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:13 np0005540697 systemd[1]: Started libpod-conmon-f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.scope.
Dec  1 04:09:13 np0005540697 podman[207285]: 2025-12-01 09:09:13.476194681 +0000 UTC m=+0.415018343 container exec f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 04:09:13 np0005540697 podman[207285]: 2025-12-01 09:09:13.511434057 +0000 UTC m=+0.450257699 container exec_died f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:09:13 np0005540697 systemd[1]: libpod-conmon-f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.scope: Deactivated successfully.
Dec  1 04:09:14 np0005540697 python3.9[207472]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:15 np0005540697 python3.9[207624]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  1 04:09:16 np0005540697 python3.9[207789]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:16 np0005540697 systemd[1]: Started libpod-conmon-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope.
Dec  1 04:09:16 np0005540697 podman[207790]: 2025-12-01 09:09:16.363978716 +0000 UTC m=+0.207624006 container exec 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 04:09:16 np0005540697 podman[207790]: 2025-12-01 09:09:16.401645151 +0000 UTC m=+0.245290481 container exec_died 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:09:16 np0005540697 systemd[1]: libpod-conmon-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope: Deactivated successfully.
Dec  1 04:09:17 np0005540697 python3.9[207973]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:17 np0005540697 systemd[1]: Started libpod-conmon-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope.
Dec  1 04:09:17 np0005540697 podman[207974]: 2025-12-01 09:09:17.2821252 +0000 UTC m=+0.086483771 container exec 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 04:09:17 np0005540697 podman[207994]: 2025-12-01 09:09:17.345204134 +0000 UTC m=+0.050856387 container exec_died 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:09:17 np0005540697 podman[207974]: 2025-12-01 09:09:17.351038045 +0000 UTC m=+0.155396586 container exec_died 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:09:17 np0005540697 systemd[1]: libpod-conmon-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope: Deactivated successfully.
Dec  1 04:09:18 np0005540697 podman[208130]: 2025-12-01 09:09:18.077925254 +0000 UTC m=+0.060031880 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 04:09:18 np0005540697 python3.9[208181]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:19 np0005540697 python3.9[208333]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  1 04:09:19 np0005540697 podman[208470]: 2025-12-01 09:09:19.646698535 +0000 UTC m=+0.057110039 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 04:09:19 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-7e57991bcbe4a900.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:09:19 np0005540697 systemd[1]: ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c-7e57991bcbe4a900.service: Failed with result 'exit-code'.
Dec  1 04:09:19 np0005540697 python3.9[208518]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:20 np0005540697 systemd[1]: Started libpod-conmon-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope.
Dec  1 04:09:20 np0005540697 podman[208520]: 2025-12-01 09:09:20.108479053 +0000 UTC m=+0.186656435 container exec ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:09:20 np0005540697 podman[208520]: 2025-12-01 09:09:20.14334712 +0000 UTC m=+0.221524492 container exec_died ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Dec  1 04:09:20 np0005540697 systemd[1]: libpod-conmon-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope: Deactivated successfully.
Dec  1 04:09:20 np0005540697 python3.9[208701]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:20 np0005540697 systemd[1]: Started libpod-conmon-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope.
Dec  1 04:09:21 np0005540697 podman[208702]: 2025-12-01 09:09:21.01766001 +0000 UTC m=+0.091204646 container exec ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  1 04:09:21 np0005540697 podman[208702]: 2025-12-01 09:09:21.049878733 +0000 UTC m=+0.123423369 container exec_died ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:09:21 np0005540697 systemd[1]: libpod-conmon-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope: Deactivated successfully.
Dec  1 04:09:22 np0005540697 python3.9[208884]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:22 np0005540697 python3.9[209036]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  1 04:09:23 np0005540697 python3.9[209202]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:23 np0005540697 systemd[1]: Started libpod-conmon-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope.
Dec  1 04:09:23 np0005540697 podman[209203]: 2025-12-01 09:09:23.810775635 +0000 UTC m=+0.102216304 container exec dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 04:09:23 np0005540697 podman[209203]: 2025-12-01 09:09:23.847344174 +0000 UTC m=+0.138784743 container exec_died dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 04:09:23 np0005540697 systemd[1]: libpod-conmon-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope: Deactivated successfully.
Dec  1 04:09:24 np0005540697 python3.9[209386]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:24 np0005540697 systemd[1]: Started libpod-conmon-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope.
Dec  1 04:09:24 np0005540697 podman[209387]: 2025-12-01 09:09:24.941830853 +0000 UTC m=+0.218040008 container exec dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:09:24 np0005540697 podman[209387]: 2025-12-01 09:09:24.97833003 +0000 UTC m=+0.254539185 container exec_died dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 04:09:25 np0005540697 systemd[1]: libpod-conmon-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope: Deactivated successfully.
Dec  1 04:09:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:09:26.490 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:09:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:09:26.491 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:09:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:09:26.491 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:09:26 np0005540697 python3.9[209569]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:27 np0005540697 python3.9[209721]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  1 04:09:28 np0005540697 python3.9[209888]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:28 np0005540697 systemd[1]: Started libpod-conmon-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope.
Dec  1 04:09:28 np0005540697 podman[209889]: 2025-12-01 09:09:28.97525999 +0000 UTC m=+0.482741208 container exec 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 04:09:29 np0005540697 podman[209889]: 2025-12-01 09:09:29.008397926 +0000 UTC m=+0.515879124 container exec_died 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 04:09:29 np0005540697 systemd[1]: libpod-conmon-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope: Deactivated successfully.
Dec  1 04:09:29 np0005540697 python3.9[210076]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:29 np0005540697 systemd[1]: Started libpod-conmon-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope.
Dec  1 04:09:29 np0005540697 podman[210077]: 2025-12-01 09:09:29.832131417 +0000 UTC m=+0.071965900 container exec 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 04:09:29 np0005540697 podman[210077]: 2025-12-01 09:09:29.866502951 +0000 UTC m=+0.106337484 container exec_died 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:09:29 np0005540697 systemd[1]: libpod-conmon-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope: Deactivated successfully.
Dec  1 04:09:30 np0005540697 podman[210232]: 2025-12-01 09:09:30.465268198 +0000 UTC m=+0.071826976 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 04:09:30 np0005540697 python3.9[210284]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:31 np0005540697 python3.9[210436]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  1 04:09:32 np0005540697 python3.9[210601]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:32 np0005540697 systemd[1]: Started libpod-conmon-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope.
Dec  1 04:09:32 np0005540697 podman[210602]: 2025-12-01 09:09:32.313793305 +0000 UTC m=+0.100555214 container exec 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Dec  1 04:09:32 np0005540697 podman[210602]: 2025-12-01 09:09:32.348846267 +0000 UTC m=+0.135608146 container exec_died 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, managed_by=edpm_ansible)
Dec  1 04:09:32 np0005540697 systemd[1]: libpod-conmon-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope: Deactivated successfully.
Dec  1 04:09:33 np0005540697 python3.9[210784]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:09:33 np0005540697 systemd[1]: Started libpod-conmon-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope.
Dec  1 04:09:33 np0005540697 podman[210785]: 2025-12-01 09:09:33.529087979 +0000 UTC m=+0.425263491 container exec 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public)
Dec  1 04:09:33 np0005540697 podman[210785]: 2025-12-01 09:09:33.845120767 +0000 UTC m=+0.741296249 container exec_died 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm)
Dec  1 04:09:33 np0005540697 systemd[1]: libpod-conmon-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope: Deactivated successfully.
Dec  1 04:09:34 np0005540697 podman[210939]: 2025-12-01 09:09:34.375899531 +0000 UTC m=+0.091071173 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:09:34 np0005540697 python3.9[210986]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:35 np0005540697 python3.9[211138]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:35 np0005540697 podman[211223]: 2025-12-01 09:09:35.675696227 +0000 UTC m=+0.054364241 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, 
managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 04:09:35 np0005540697 python3.9[211311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:36 np0005540697 python3.9[211434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580175.516052-1082-113250291574826/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:37 np0005540697 python3.9[211586]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:37 np0005540697 podman[211710]: 2025-12-01 09:09:37.610872449 +0000 UTC m=+0.058129587 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  1 04:09:37 np0005540697 podman[211711]: 2025-12-01 09:09:37.638078135 +0000 UTC m=+0.080094244 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  1 04:09:37 np0005540697 python3.9[211777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:38 np0005540697 python3.9[211858]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:38 np0005540697 python3.9[212010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:39 np0005540697 python3.9[212088]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.roo_gvwm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:39 np0005540697 python3.9[212240]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:40 np0005540697 python3.9[212318]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:41 np0005540697 python3.9[212470]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:09:41 np0005540697 python3[212623]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 04:09:42 np0005540697 python3.9[212775]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:42 np0005540697 python3.9[212853]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:43 np0005540697 python3.9[213005]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:44 np0005540697 python3.9[213083]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:45 np0005540697 python3.9[213235]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:45 np0005540697 python3.9[213313]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:46 np0005540697 python3.9[213465]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:47 np0005540697 python3.9[213543]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:47 np0005540697 python3.9[213695]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:09:48 np0005540697 podman[213792]: 2025-12-01 09:09:48.283269951 +0000 UTC m=+0.076028155 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:09:48 np0005540697 python3.9[213833]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764580187.2823415-1207-19729932649330/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:49 np0005540697 python3.9[213996]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:49 np0005540697 python3.9[214148]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:09:50 np0005540697 podman[214275]: 2025-12-01 09:09:50.536711405 +0000 UTC m=+0.069908776 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS 
Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 04:09:50 np0005540697 python3.9[214321]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:51 np0005540697 python3.9[214474]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:09:52 np0005540697 python3.9[214627]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:09:52 np0005540697 python3.9[214781]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:09:53 np0005540697 python3.9[214936]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:54 np0005540697 systemd[1]: session-25.scope: Deactivated successfully.
Dec  1 04:09:54 np0005540697 systemd[1]: session-25.scope: Consumed 1min 49.407s CPU time.
Dec  1 04:09:54 np0005540697 systemd-logind[792]: Session 25 logged out. Waiting for processes to exit.
Dec  1 04:09:54 np0005540697 systemd-logind[792]: Removed session 25.
Dec  1 04:09:59 np0005540697 podman[203700]: time="2025-12-01T09:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 04:09:59 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Dec  1 04:09:59 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3404 "" "Go-http-client/1.1"
Dec  1 04:09:59 np0005540697 systemd-logind[792]: New session 26 of user zuul.
Dec  1 04:09:59 np0005540697 systemd[1]: Started Session 26 of User zuul.
Dec  1 04:10:00 np0005540697 podman[215090]: 2025-12-01 09:10:00.616594302 +0000 UTC m=+0.076535288 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 04:10:00 np0005540697 python3.9[215134]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:10:00 np0005540697 systemd[1]: Reloading.
Dec  1 04:10:01 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:10:01 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:10:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:10:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:10:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 04:10:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 04:10:01 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:10:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 04:10:01 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:10:02 np0005540697 python3.9[215331]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:10:02 np0005540697 network[215348]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:10:02 np0005540697 network[215349]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:10:02 np0005540697 network[215350]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:10:04 np0005540697 podman[215387]: 2025-12-01 09:10:04.720637892 +0000 UTC m=+0.091474805 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 04:10:05 np0005540697 podman[215450]: 2025-12-01 09:10:05.847018299 +0000 UTC m=+0.098307171 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 04:10:07 np0005540697 podman[215663]: 2025-12-01 09:10:07.770715158 +0000 UTC m=+0.093462113 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 04:10:07 np0005540697 podman[215664]: 2025-12-01 09:10:07.802217271 +0000 UTC m=+0.123618293 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:10:07 np0005540697 python3.9[215665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:10:09 np0005540697 python3.9[215860]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.741 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.741 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.742 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 04:10:09 np0005540697 python3.9[216012]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.909 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.910 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5892MB free_disk=72.4430046081543GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.911 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.911 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.971 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.971 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 04:10:09 np0005540697 nova_compute[189491]: 2025-12-01 09:10:09.991 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 04:10:10 np0005540697 nova_compute[189491]: 2025-12-01 09:10:10.007 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 04:10:10 np0005540697 nova_compute[189491]: 2025-12-01 09:10:10.009 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 04:10:10 np0005540697 nova_compute[189491]: 2025-12-01 09:10:10.009 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:10:10 np0005540697 python3.9[216164]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:10:11 np0005540697 nova_compute[189491]: 2025-12-01 09:10:11.008 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:11 np0005540697 python3.9[216316]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 04:10:11 np0005540697 nova_compute[189491]: 2025-12-01 09:10:11.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:11 np0005540697 nova_compute[189491]: 2025-12-01 09:10:11.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 04:10:11 np0005540697 nova_compute[189491]: 2025-12-01 09:10:11.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 04:10:11 np0005540697 nova_compute[189491]: 2025-12-01 09:10:11.730 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 04:10:11 np0005540697 nova_compute[189491]: 2025-12-01 09:10:11.731 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:11 np0005540697 nova_compute[189491]: 2025-12-01 09:10:11.732 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:12 np0005540697 python3.9[216468]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:10:12 np0005540697 systemd[1]: Reloading.
Dec  1 04:10:12 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:10:12 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:10:12 np0005540697 nova_compute[189491]: 2025-12-01 09:10:12.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:12 np0005540697 nova_compute[189491]: 2025-12-01 09:10:12.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:13 np0005540697 nova_compute[189491]: 2025-12-01 09:10:13.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:10:13 np0005540697 python3.9[216655]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:10:14 np0005540697 python3.9[216808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:10:15 np0005540697 python3.9[216958]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:10:16 np0005540697 python3.9[217110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:17 np0005540697 python3.9[217231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580215.917144-125-106785862235233/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:10:18 np0005540697 python3.9[217383]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  1 04:10:18 np0005540697 podman[217409]: 2025-12-01 09:10:18.735348404 +0000 UTC m=+0.105432566 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 04:10:19 np0005540697 python3.9[217560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.775 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.777 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'cpu': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'cpu': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:10:19.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:10:20 np0005540697 python3.9[217682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764580218.9929833-171-195465518173660/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:20 np0005540697 python3.9[217832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:20 np0005540697 podman[217833]: 2025-12-01 09:10:20.693900087 +0000 UTC m=+0.065888146 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  1 04:10:21 np0005540697 python3.9[217973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764580220.2167277-171-36129347461558/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:21 np0005540697 python3.9[218125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:22 np0005540697 python3.9[218246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764580221.3796117-171-248562309024538/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:23 np0005540697 python3.9[218396]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:10:23 np0005540697 python3.9[218549]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:10:24 np0005540697 python3.9[218701]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:25 np0005540697 python3.9[218822]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580224.1213706-230-229488888585605/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:25 np0005540697 python3.9[218972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:26 np0005540697 python3.9[219048]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:10:26.491 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:10:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:10:26.492 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:10:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:10:26.493 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:10:27 np0005540697 python3.9[219198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:27 np0005540697 python3.9[219319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580226.5426345-230-184372404700455/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:28 np0005540697 python3.9[219469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:29 np0005540697 python3.9[219590]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580227.9333496-230-242407910355235/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:29 np0005540697 python3.9[219740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:29 np0005540697 podman[203700]: time="2025-12-01T09:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 04:10:29 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Dec  1 04:10:29 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3412 "" "Go-http-client/1.1"
Dec  1 04:10:30 np0005540697 python3.9[219861]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580229.1690989-230-140149315586296/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:30 np0005540697 podman[219985]: 2025-12-01 09:10:30.84961093 +0000 UTC m=+0.084139464 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:10:31 np0005540697 python3.9[220024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:10:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:10:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 04:10:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 04:10:31 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:10:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 04:10:31 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:10:31 np0005540697 python3.9[220154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580230.4699423-230-243149848780470/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:32 np0005540697 python3.9[220304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:33 np0005540697 python3.9[220380]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:33 np0005540697 python3.9[220532]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:34 np0005540697 python3.9[220684]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:35 np0005540697 podman[220808]: 2025-12-01 09:10:35.143438724 +0000 UTC m=+0.052383155 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:10:35 np0005540697 python3.9[220853]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:10:36 np0005540697 podman[221005]: 2025-12-01 09:10:36.031209041 +0000 UTC m=+0.094033216 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, distribution-scope=public, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 04:10:36 np0005540697 python3.9[221006]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:36 np0005540697 python3.9[221149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580235.6091046-349-121138088531474/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:10:37 np0005540697 python3.9[221225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:37 np0005540697 podman[221348]: 2025-12-01 09:10:37.876356224 +0000 UTC m=+0.050325395 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 04:10:37 np0005540697 podman[221349]: 2025-12-01 09:10:37.911223729 +0000 UTC m=+0.074646701 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  1 04:10:38 np0005540697 python3.9[221355]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580235.6091046-349-121138088531474/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:10:38 np0005540697 python3.9[221547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:10:39 np0005540697 python3.9[221670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764580238.2178106-349-40133721453913/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:10:40 np0005540697 python3.9[221822]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  1 04:10:41 np0005540697 python3.9[221974]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:10:42 np0005540697 python3[222126]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:10:42 np0005540697 podman[222165]: 2025-12-01 09:10:42.659509265 +0000 UTC m=+0.068383580 container create e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 04:10:42 np0005540697 podman[222165]: 2025-12-01 09:10:42.624443394 +0000 UTC m=+0.033317739 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  1 04:10:42 np0005540697 python3[222126]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec  1 04:10:43 np0005540697 python3.9[222355]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:10:44 np0005540697 python3.9[222509]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:45 np0005540697 python3.9[222660]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580244.6718836-427-183105871979516/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:46 np0005540697 python3.9[222736]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:10:46 np0005540697 systemd[1]: Reloading.
Dec  1 04:10:46 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:10:46 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:10:47 np0005540697 python3.9[222849]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:10:47 np0005540697 systemd[1]: Reloading.
Dec  1 04:10:47 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:10:47 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:10:47 np0005540697 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  1 04:10:47 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:10:47 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:10:47 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:10:47 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 04:10:47 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 04:10:47 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.
Dec  1 04:10:47 np0005540697 podman[222888]: 2025-12-01 09:10:47.829653546 +0000 UTC m=+0.172989918 container init e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + sudo -E kolla_set_configs
Dec  1 04:10:47 np0005540697 podman[222888]: 2025-12-01 09:10:47.87089798 +0000 UTC m=+0.214234302 container start e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 04:10:47 np0005540697 podman[222888]: ceilometer_agent_ipmi
Dec  1 04:10:47 np0005540697 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Validating config file
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Copying service configuration files
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: INFO:__main__:Writing out command to execute
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: ++ cat /run_command
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + ARGS=
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + sudo kolla_copy_cacerts
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + [[ ! -n '' ]]
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + . kolla_extend_start
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + umask 0022
Dec  1 04:10:47 np0005540697 ceilometer_agent_ipmi[222903]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  1 04:10:48 np0005540697 podman[222910]: 2025-12-01 09:10:48.00655383 +0000 UTC m=+0.116496445 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 04:10:48 np0005540697 systemd[1]: e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f-7341fc6a74223a01.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:10:48 np0005540697 systemd[1]: e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f-7341fc6a74223a01.service: Failed with result 'exit-code'.
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.821 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.822 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.822 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.822 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.823 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.823 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.823 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.823 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.823 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.824 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.824 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.824 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.824 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.825 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.825 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.825 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.825 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.825 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.826 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.826 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.826 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.826 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.826 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.826 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.827 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.827 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.827 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.827 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.827 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.828 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.828 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.828 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.828 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.828 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.828 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.829 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.829 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.829 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.829 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.829 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.830 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.830 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.830 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.830 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.830 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.831 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.831 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.831 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.831 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.831 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.831 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.832 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.832 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.832 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.832 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.832 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.833 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.833 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.833 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.833 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.833 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.834 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.834 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.834 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.834 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.834 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.835 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.835 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.835 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.835 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.835 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.836 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.836 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.836 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.836 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.836 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.836 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.837 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.837 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.837 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.837 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.837 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.838 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.838 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.838 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.838 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.838 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.839 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.839 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.839 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.839 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.840 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.840 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.840 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.840 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.840 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.841 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.841 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.841 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.841 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.842 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.842 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.842 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.842 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.842 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.843 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.843 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.843 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.843 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.844 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.844 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.844 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.845 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.845 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.845 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.845 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.845 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.846 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.846 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.846 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.846 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.846 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.847 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.847 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.847 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.847 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.847 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.848 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.848 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.848 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.848 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.848 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.850 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.850 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.850 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.850 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.850 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.851 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.851 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.851 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.851 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.851 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.851 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.852 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.852 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.852 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.852 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.852 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.853 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.853 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.853 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.853 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.853 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.854 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.854 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.854 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.854 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.854 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.854 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 04:10:48 np0005540697 python3.9[223086]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.874 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.875 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.876 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 04:10:48 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:48.981 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp9e_6tpn2/privsep.sock']
Dec  1 04:10:49 np0005540697 podman[223218]: 2025-12-01 09:10:49.62052712 +0000 UTC m=+0.087345052 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.658 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.658 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp9e_6tpn2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.536 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.542 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.545 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.546 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.764 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.766 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.767 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.767 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.767 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.768 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.768 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.768 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.768 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.769 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.769 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.769 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.774 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.775 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.775 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.775 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.775 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.775 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.776 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.776 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.776 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.776 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.776 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.777 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.777 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.777 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.777 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.778 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.778 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.778 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.778 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.779 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.779 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.779 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.779 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.779 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.780 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.780 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.780 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.780 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.780 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.781 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.781 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.781 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.781 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.781 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.782 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.782 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.782 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.782 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.782 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.783 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.783 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.783 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.783 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 python3.9[223271]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.783 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.784 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.784 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.784 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.784 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.784 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.785 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.785 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.785 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.785 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.786 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.786 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.786 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.786 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.786 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.786 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.787 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.787 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.787 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.787 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.787 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.788 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.788 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.788 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.788 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.789 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.789 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.789 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.790 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.790 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.790 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.790 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.790 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.791 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.791 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.791 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.791 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.791 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.792 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.792 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.792 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.792 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.792 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.793 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.793 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.793 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.793 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.793 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.794 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.794 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.794 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.794 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.794 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.795 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.795 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.795 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.795 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.795 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.796 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.796 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.796 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.796 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.797 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.797 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.797 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.797 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.798 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.798 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.798 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.798 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.798 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.799 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.799 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.799 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.799 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.799 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.802 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.802 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.802 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.802 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.803 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.803 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.803 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.803 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.803 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.804 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.804 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.804 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.804 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.804 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.804 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.805 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.805 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.806 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.809 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.810 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.811 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  1 04:10:49 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:49.814 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  1 04:10:50 np0005540697 python3[223427]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:10:51 np0005540697 podman[223466]: 2025-12-01 09:10:51.017647963 +0000 UTC m=+0.069262232 container create f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, config_id=edpm, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Dec  1 04:10:51 np0005540697 podman[223466]: 2025-12-01 09:10:50.977843553 +0000 UTC m=+0.029457892 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  1 04:10:51 np0005540697 python3[223427]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec  1 04:10:51 np0005540697 podman[223607]: 2025-12-01 09:10:51.711177609 +0000 UTC m=+0.078722786 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 04:10:51 np0005540697 python3.9[223676]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:10:52 np0005540697 python3.9[223830]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:53 np0005540697 python3.9[223981]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580252.7946978-489-31012805120509/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:54 np0005540697 python3.9[224057]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:10:54 np0005540697 systemd[1]: Reloading.
Dec  1 04:10:54 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:10:54 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:10:55 np0005540697 python3.9[224169]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:10:55 np0005540697 systemd[1]: Reloading.
Dec  1 04:10:55 np0005540697 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:10:55 np0005540697 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:10:55 np0005540697 systemd[1]: Starting kepler container...
Dec  1 04:10:55 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:10:55 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359.
Dec  1 04:10:55 np0005540697 podman[224209]: 2025-12-01 09:10:55.855235504 +0000 UTC m=+0.138088382 container init f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Dec  1 04:10:55 np0005540697 kepler[224224]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 04:10:55 np0005540697 podman[224209]: 2025-12-01 09:10:55.883874365 +0000 UTC m=+0.166727193 container start f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30)
Dec  1 04:10:55 np0005540697 kepler[224224]: I1201 09:10:55.889023       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  1 04:10:55 np0005540697 kepler[224224]: I1201 09:10:55.889514       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  1 04:10:55 np0005540697 kepler[224224]: I1201 09:10:55.889555       1 config.go:295] kernel version: 5.14
Dec  1 04:10:55 np0005540697 podman[224209]: kepler
Dec  1 04:10:55 np0005540697 kepler[224224]: I1201 09:10:55.890547       1 power.go:78] Unable to obtain power, use estimate method
Dec  1 04:10:55 np0005540697 kepler[224224]: I1201 09:10:55.890587       1 redfish.go:169] failed to get redfish credential file path
Dec  1 04:10:55 np0005540697 kepler[224224]: I1201 09:10:55.891208       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  1 04:10:55 np0005540697 kepler[224224]: I1201 09:10:55.891230       1 power.go:79] using none to obtain power
Dec  1 04:10:55 np0005540697 kepler[224224]: E1201 09:10:55.891249       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  1 04:10:55 np0005540697 kepler[224224]: E1201 09:10:55.891281       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  1 04:10:55 np0005540697 kepler[224224]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 04:10:55 np0005540697 kepler[224224]: I1201 09:10:55.894014       1 exporter.go:84] Number of CPUs: 8
Dec  1 04:10:55 np0005540697 systemd[1]: Started kepler container.
Dec  1 04:10:56 np0005540697 podman[224234]: 2025-12-01 09:10:56.026754904 +0000 UTC m=+0.130347529 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9)
Dec  1 04:10:56 np0005540697 systemd[1]: f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359-6b355f8ed4d59184.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:10:56 np0005540697 systemd[1]: f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359-6b355f8ed4d59184.service: Failed with result 'exit-code'.
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.481347       1 watcher.go:83] Using in cluster k8s config
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.481400       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  1 04:10:56 np0005540697 kepler[224224]: E1201 09:10:56.481645       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.486503       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.486545       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.491687       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.491716       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.499764       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.499794       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.499808       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.508947       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509003       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509009       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509018       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509027       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509040       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509100       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509124       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509142       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509155       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509253       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  1 04:10:56 np0005540697 kepler[224224]: I1201 09:10:56.509496       1 exporter.go:208] Started Kepler in 620.872771ms
Dec  1 04:10:56 np0005540697 python3.9[224406]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:10:56 np0005540697 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  1 04:10:56 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:56.883 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  1 04:10:56 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:56.985 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  1 04:10:56 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:56.985 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  1 04:10:56 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:56.986 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[222903]: 2025-12-01 09:10:57.001 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  1 04:10:57 np0005540697 systemd[1]: libpod-e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.scope: Deactivated successfully.
Dec  1 04:10:57 np0005540697 systemd[1]: libpod-e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.scope: Consumed 2.224s CPU time.
Dec  1 04:10:57 np0005540697 conmon[222903]: conmon e4882e1d1b7c67c2c4e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.scope/container/memory.events
Dec  1 04:10:57 np0005540697 podman[224420]: 2025-12-01 09:10:57.165641164 +0000 UTC m=+0.333789113 container stop e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, tcib_managed=true)
Dec  1 04:10:57 np0005540697 podman[224420]: 2025-12-01 09:10:57.167558951 +0000 UTC m=+0.335706900 container died e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:10:57 np0005540697 systemd[1]: e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f-7341fc6a74223a01.timer: Deactivated successfully.
Dec  1 04:10:57 np0005540697 systemd[1]: Stopped /usr/bin/podman healthcheck run e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.
Dec  1 04:10:57 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f-userdata-shm.mount: Deactivated successfully.
Dec  1 04:10:57 np0005540697 systemd[1]: var-lib-containers-storage-overlay-4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e-merged.mount: Deactivated successfully.
Dec  1 04:10:57 np0005540697 podman[224420]: 2025-12-01 09:10:57.228106655 +0000 UTC m=+0.396254614 container cleanup e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 04:10:57 np0005540697 podman[224420]: ceilometer_agent_ipmi
Dec  1 04:10:57 np0005540697 podman[224450]: ceilometer_agent_ipmi
Dec  1 04:10:57 np0005540697 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  1 04:10:57 np0005540697 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec  1 04:10:57 np0005540697 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  1 04:10:57 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:10:57 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 04:10:57 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:10:57 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 04:10:57 np0005540697 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8a38993d6798196bb3607c474ecc4d5e0149b4d2bd5be34de9fd8d03d9a74e/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 04:10:57 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.
Dec  1 04:10:57 np0005540697 podman[224463]: 2025-12-01 09:10:57.644629712 +0000 UTC m=+0.239839800 container init e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + sudo -E kolla_set_configs
Dec  1 04:10:57 np0005540697 podman[224463]: 2025-12-01 09:10:57.693161887 +0000 UTC m=+0.288371925 container start e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  1 04:10:57 np0005540697 podman[224463]: ceilometer_agent_ipmi
Dec  1 04:10:57 np0005540697 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Validating config file
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Copying service configuration files
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: INFO:__main__:Writing out command to execute
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: ++ cat /run_command
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + ARGS=
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + sudo kolla_copy_cacerts
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + [[ ! -n '' ]]
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + . kolla_extend_start
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + umask 0022
Dec  1 04:10:57 np0005540697 ceilometer_agent_ipmi[224477]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  1 04:10:57 np0005540697 podman[224484]: 2025-12-01 09:10:57.817582768 +0000 UTC m=+0.106155448 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 04:10:57 np0005540697 systemd[1]: e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f-4428a816076b5484.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:10:57 np0005540697 systemd[1]: e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f-4428a816076b5484.service: Failed with result 'exit-code'.
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.630 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.631 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.632 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.633 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.634 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.635 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.636 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.637 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.640 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.641 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.642 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.643 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.644 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.645 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.646 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.646 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.668 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.671 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.673 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 04:10:58 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:58.701 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpj5c5wbjr/privsep.sock']
Dec  1 04:10:58 np0005540697 python3.9[224658]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:10:58 np0005540697 systemd[1]: Stopping kepler container...
Dec  1 04:10:59 np0005540697 kepler[224224]: I1201 09:10:59.003846       1 exporter.go:218] Received shutdown signal
Dec  1 04:10:59 np0005540697 kepler[224224]: I1201 09:10:59.004398       1 exporter.go:226] Exiting...
Dec  1 04:10:59 np0005540697 systemd[1]: libpod-f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359.scope: Deactivated successfully.
Dec  1 04:10:59 np0005540697 podman[224669]: 2025-12-01 09:10:59.208563069 +0000 UTC m=+0.271481524 container died f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, release-0.7.12=, version=9.4, name=ubi9, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 04:10:59 np0005540697 systemd[1]: f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359-6b355f8ed4d59184.timer: Deactivated successfully.
Dec  1 04:10:59 np0005540697 systemd[1]: Stopped /usr/bin/podman healthcheck run f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359.
Dec  1 04:10:59 np0005540697 systemd[1]: var-lib-containers-storage-overlay-dc2aebc8c86a3334ed58f51315a1d666fdb33b9c115bec232caaf97c6f1b2f05-merged.mount: Deactivated successfully.
Dec  1 04:10:59 np0005540697 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359-userdata-shm.mount: Deactivated successfully.
Dec  1 04:10:59 np0005540697 podman[224669]: 2025-12-01 09:10:59.265209866 +0000 UTC m=+0.328128321 container cleanup f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, build-date=2024-09-18T21:23:30)
Dec  1 04:10:59 np0005540697 podman[224669]: kepler
Dec  1 04:10:59 np0005540697 podman[224700]: kepler
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.354 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.355 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpj5c5wbjr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.248 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.256 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.260 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.260 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  1 04:10:59 np0005540697 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  1 04:10:59 np0005540697 systemd[1]: Stopped kepler container.
Dec  1 04:10:59 np0005540697 systemd[1]: Starting kepler container...
Dec  1 04:10:59 np0005540697 systemd[1]: Started libcrun container.
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.503 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.503 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.505 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.506 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.506 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.506 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.506 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.507 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.507 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.507 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.507 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.508 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.508 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.513 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.513 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.514 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.514 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.514 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.514 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.514 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.515 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.515 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.515 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.515 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.515 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.516 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.516 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.516 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.516 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.517 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.517 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.517 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.517 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.518 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.518 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.518 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.518 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.518 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.519 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.519 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.519 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.519 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.519 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.520 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.520 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.520 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.520 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.520 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.520 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.521 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.521 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.521 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.521 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.521 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.522 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.522 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.522 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.522 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.522 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.523 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.523 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.523 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.523 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.523 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.524 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.524 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.524 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.524 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.524 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.524 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.525 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.525 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.525 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.525 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.525 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.526 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.526 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.526 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.526 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.527 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.527 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.527 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.527 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.528 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.528 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 systemd[1]: Started /usr/bin/podman healthcheck run f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359.
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.529 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.529 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.529 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.529 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.530 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.530 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.530 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.530 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.530 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.531 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.531 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.531 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.531 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.531 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.531 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.534 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.534 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.534 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.535 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.535 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.535 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 podman[224715]: 2025-12-01 09:10:59.535791646 +0000 UTC m=+0.143468314 container init f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.29.0, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, maintainer=Red Hat, Inc.)
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.536 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.537 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.537 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.537 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.537 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.537 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.537 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.537 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.546 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.546 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.546 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.546 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.546 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.546 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.546 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.546 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  1 04:10:59 np0005540697 ceilometer_agent_ipmi[224477]: 2025-12-01 09:10:59.550 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  1 04:10:59 np0005540697 kepler[224730]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 04:10:59 np0005540697 kepler[224730]: I1201 09:10:59.567459       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  1 04:10:59 np0005540697 kepler[224730]: I1201 09:10:59.567579       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  1 04:10:59 np0005540697 kepler[224730]: I1201 09:10:59.567592       1 config.go:295] kernel version: 5.14
Dec  1 04:10:59 np0005540697 kepler[224730]: I1201 09:10:59.568305       1 power.go:78] Unable to obtain power, use estimate method
Dec  1 04:10:59 np0005540697 kepler[224730]: I1201 09:10:59.568325       1 redfish.go:169] failed to get redfish credential file path
Dec  1 04:10:59 np0005540697 kepler[224730]: I1201 09:10:59.568660       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  1 04:10:59 np0005540697 kepler[224730]: I1201 09:10:59.568672       1 power.go:79] using none to obtain power
Dec  1 04:10:59 np0005540697 kepler[224730]: E1201 09:10:59.568688       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  1 04:10:59 np0005540697 kepler[224730]: E1201 09:10:59.568707       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  1 04:10:59 np0005540697 kepler[224730]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 04:10:59 np0005540697 kepler[224730]: I1201 09:10:59.570272       1 exporter.go:84] Number of CPUs: 8
Dec  1 04:10:59 np0005540697 podman[224715]: 2025-12-01 09:10:59.573954325 +0000 UTC m=+0.181630993 container start f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 04:10:59 np0005540697 podman[224715]: kepler
Dec  1 04:10:59 np0005540697 systemd[1]: Started kepler container.
Dec  1 04:10:59 np0005540697 podman[224742]: 2025-12-01 09:10:59.644154019 +0000 UTC m=+0.058949246 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, architecture=x86_64, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, distribution-scope=public)
Dec  1 04:10:59 np0005540697 systemd[1]: f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359-28d034e3e0268b46.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:10:59 np0005540697 systemd[1]: f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359-28d034e3e0268b46.service: Failed with result 'exit-code'.
Dec  1 04:10:59 np0005540697 podman[203700]: time="2025-12-01T09:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 04:10:59 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28294 "" "Go-http-client/1.1"
Dec  1 04:10:59 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4256 "" "Go-http-client/1.1"
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.123948       1 watcher.go:83] Using in cluster k8s config
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.124104       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  1 04:11:00 np0005540697 kepler[224730]: E1201 09:11:00.124254       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.127782       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.127853       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.131899       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.131934       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.144631       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.144671       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.144687       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152101       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152136       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152142       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152147       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152155       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152170       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152263       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152294       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152323       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152385       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.152484       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  1 04:11:00 np0005540697 kepler[224730]: I1201 09:11:00.153134       1 exporter.go:208] Started Kepler in 585.853711ms
Dec  1 04:11:00 np0005540697 python3.9[224916]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 04:11:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 04:11:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:11:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:11:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 04:11:01 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:11:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 04:11:01 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:11:01 np0005540697 podman[225050]: 2025-12-01 09:11:01.493152616 +0000 UTC m=+0.115731155 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 04:11:01 np0005540697 python3.9[225101]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  1 04:11:02 np0005540697 python3.9[225265]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:03 np0005540697 systemd[1]: Started libpod-conmon-8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.scope.
Dec  1 04:11:03 np0005540697 podman[225266]: 2025-12-01 09:11:03.049697339 +0000 UTC m=+0.128569065 container exec 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 04:11:03 np0005540697 podman[225266]: 2025-12-01 09:11:03.062799744 +0000 UTC m=+0.141671470 container exec_died 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec  1 04:11:03 np0005540697 systemd[1]: libpod-conmon-8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.scope: Deactivated successfully.
Dec  1 04:11:04 np0005540697 python3.9[225448]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:04 np0005540697 systemd[1]: Started libpod-conmon-8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.scope.
Dec  1 04:11:04 np0005540697 podman[225449]: 2025-12-01 09:11:04.201338385 +0000 UTC m=+0.142022559 container exec 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 04:11:04 np0005540697 podman[225449]: 2025-12-01 09:11:04.210748398 +0000 UTC m=+0.151432572 container exec_died 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  1 04:11:04 np0005540697 systemd[1]: libpod-conmon-8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4.scope: Deactivated successfully.
Dec  1 04:11:05 np0005540697 python3.9[225629]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:05 np0005540697 podman[225729]: 2025-12-01 09:11:05.707781104 +0000 UTC m=+0.077266501 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 04:11:06 np0005540697 python3.9[225801]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  1 04:11:06 np0005540697 podman[225883]: 2025-12-01 09:11:06.769331412 +0000 UTC m=+0.115258335 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-type=git, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Dec  1 04:11:07 np0005540697 python3.9[225985]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:07 np0005540697 systemd[1]: Started libpod-conmon-f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.scope.
Dec  1 04:11:07 np0005540697 podman[225986]: 2025-12-01 09:11:07.70369093 +0000 UTC m=+0.441517377 container exec f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:11:07 np0005540697 podman[226002]: 2025-12-01 09:11:07.779463442 +0000 UTC m=+0.062294538 container exec_died f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:11:07 np0005540697 podman[225986]: 2025-12-01 09:11:07.786756973 +0000 UTC m=+0.524583410 container exec_died f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 04:11:07 np0005540697 systemd[1]: libpod-conmon-f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.scope: Deactivated successfully.
Dec  1 04:11:08 np0005540697 podman[226137]: 2025-12-01 09:11:08.57812112 +0000 UTC m=+0.097474592 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  1 04:11:08 np0005540697 podman[226138]: 2025-12-01 09:11:08.642479219 +0000 UTC m=+0.148957512 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:11:08 np0005540697 python3.9[226204]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:08 np0005540697 systemd[1]: Started libpod-conmon-f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.scope.
Dec  1 04:11:08 np0005540697 podman[226209]: 2025-12-01 09:11:08.95134787 +0000 UTC m=+0.129952979 container exec f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 04:11:08 np0005540697 podman[226209]: 2025-12-01 09:11:08.985897989 +0000 UTC m=+0.164503058 container exec_died f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:11:09 np0005540697 systemd[1]: libpod-conmon-f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed.scope: Deactivated successfully.
Dec  1 04:11:09 np0005540697 python3.9[226395]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:10 np0005540697 nova_compute[189491]: 2025-12-01 09:11:10.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:11:10 np0005540697 nova_compute[189491]: 2025-12-01 09:11:10.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:11:10 np0005540697 nova_compute[189491]: 2025-12-01 09:11:10.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 04:11:10 np0005540697 nova_compute[189491]: 2025-12-01 09:11:10.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:11:10 np0005540697 nova_compute[189491]: 2025-12-01 09:11:10.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:11:10 np0005540697 nova_compute[189491]: 2025-12-01 09:11:10.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:11:10 np0005540697 nova_compute[189491]: 2025-12-01 09:11:10.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:11:10 np0005540697 nova_compute[189491]: 2025-12-01 09:11:10.742 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 04:11:11 np0005540697 python3.9[226547]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.051 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.052 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5775MB free_disk=72.44132995605469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.053 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.053 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.130 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.131 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.155 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.168 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.170 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 04:11:11 np0005540697 nova_compute[189491]: 2025-12-01 09:11:11.170 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:11:12 np0005540697 python3.9[226713]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:12 np0005540697 systemd[1]: Started libpod-conmon-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope.
Dec  1 04:11:12 np0005540697 nova_compute[189491]: 2025-12-01 09:11:12.170 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:11:12 np0005540697 podman[226714]: 2025-12-01 09:11:12.186687423 +0000 UTC m=+0.087959986 container exec 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:11:12 np0005540697 podman[226714]: 2025-12-01 09:11:12.218647887 +0000 UTC m=+0.119920450 container exec_died 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 04:11:12 np0005540697 systemd[1]: libpod-conmon-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope: Deactivated successfully.
Dec  1 04:11:12 np0005540697 nova_compute[189491]: 2025-12-01 09:11:12.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:11:12 np0005540697 nova_compute[189491]: 2025-12-01 09:11:12.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:11:12 np0005540697 nova_compute[189491]: 2025-12-01 09:11:12.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:11:13 np0005540697 python3.9[226895]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:13 np0005540697 systemd[1]: Started libpod-conmon-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope.
Dec  1 04:11:13 np0005540697 podman[226896]: 2025-12-01 09:11:13.361027803 +0000 UTC m=+0.117296055 container exec 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 04:11:13 np0005540697 podman[226896]: 2025-12-01 09:11:13.393167071 +0000 UTC m=+0.149435333 container exec_died 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  1 04:11:13 np0005540697 systemd[1]: libpod-conmon-5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed.scope: Deactivated successfully.
Dec  1 04:11:13 np0005540697 nova_compute[189491]: 2025-12-01 09:11:13.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:11:13 np0005540697 nova_compute[189491]: 2025-12-01 09:11:13.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:11:13 np0005540697 nova_compute[189491]: 2025-12-01 09:11:13.713 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 04:11:13 np0005540697 nova_compute[189491]: 2025-12-01 09:11:13.713 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 04:11:13 np0005540697 nova_compute[189491]: 2025-12-01 09:11:13.735 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 04:11:14 np0005540697 python3.9[227078]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:14 np0005540697 nova_compute[189491]: 2025-12-01 09:11:14.731 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:11:15 np0005540697 python3.9[227230]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  1 04:11:16 np0005540697 python3.9[227396]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:16 np0005540697 systemd[1]: Started libpod-conmon-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope.
Dec  1 04:11:16 np0005540697 podman[227397]: 2025-12-01 09:11:16.910034276 +0000 UTC m=+0.110970317 container exec ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  1 04:11:16 np0005540697 podman[227397]: 2025-12-01 09:11:16.943794784 +0000 UTC m=+0.144730805 container exec_died ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Dec  1 04:11:16 np0005540697 systemd[1]: libpod-conmon-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope: Deactivated successfully.
Dec  1 04:11:17 np0005540697 python3.9[227579]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:18 np0005540697 systemd[1]: Started libpod-conmon-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope.
Dec  1 04:11:18 np0005540697 podman[227580]: 2025-12-01 09:11:18.049176101 +0000 UTC m=+0.119154010 container exec ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 04:11:18 np0005540697 podman[227580]: 2025-12-01 09:11:18.083182016 +0000 UTC m=+0.153159925 container exec_died ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec  1 04:11:18 np0005540697 systemd[1]: libpod-conmon-ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c.scope: Deactivated successfully.
Dec  1 04:11:19 np0005540697 python3.9[227760]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:20 np0005540697 podman[227885]: 2025-12-01 09:11:20.240181934 +0000 UTC m=+0.100970499 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:11:20 np0005540697 python3.9[227935]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  1 04:11:21 np0005540697 python3.9[228104]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:21 np0005540697 podman[228105]: 2025-12-01 09:11:21.95915753 +0000 UTC m=+0.126611265 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  1 04:11:21 np0005540697 systemd[1]: Started libpod-conmon-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope.
Dec  1 04:11:21 np0005540697 podman[228114]: 2025-12-01 09:11:21.998427087 +0000 UTC m=+0.125548360 container exec dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:11:22 np0005540697 podman[228114]: 2025-12-01 09:11:22.030358999 +0000 UTC m=+0.157480252 container exec_died dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:11:22 np0005540697 systemd[1]: libpod-conmon-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope: Deactivated successfully.
Dec  1 04:11:22 np0005540697 python3.9[228303]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:23 np0005540697 systemd[1]: Started libpod-conmon-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope.
Dec  1 04:11:23 np0005540697 podman[228304]: 2025-12-01 09:11:23.204433192 +0000 UTC m=+0.186472702 container exec dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 04:11:23 np0005540697 podman[228324]: 2025-12-01 09:11:23.316584837 +0000 UTC m=+0.100294742 container exec_died dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:11:23 np0005540697 podman[228304]: 2025-12-01 09:11:23.369723508 +0000 UTC m=+0.351763018 container exec_died dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 04:11:23 np0005540697 systemd[1]: libpod-conmon-dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30.scope: Deactivated successfully.
Dec  1 04:11:24 np0005540697 python3.9[228484]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:25 np0005540697 python3.9[228636]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  1 04:11:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:11:26.492 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:11:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:11:26.494 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:11:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:11:26.494 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:11:26 np0005540697 python3.9[228801]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:26 np0005540697 systemd[1]: Started libpod-conmon-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope.
Dec  1 04:11:26 np0005540697 podman[228802]: 2025-12-01 09:11:26.78504058 +0000 UTC m=+0.139710062 container exec 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 04:11:26 np0005540697 podman[228802]: 2025-12-01 09:11:26.820526172 +0000 UTC m=+0.175195644 container exec_died 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 04:11:26 np0005540697 systemd[1]: libpod-conmon-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope: Deactivated successfully.
Dec  1 04:11:27 np0005540697 python3.9[228981]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:27 np0005540697 systemd[1]: Started libpod-conmon-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope.
Dec  1 04:11:27 np0005540697 podman[228982]: 2025-12-01 09:11:27.946645603 +0000 UTC m=+0.112527626 container exec 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:11:27 np0005540697 podman[228982]: 2025-12-01 09:11:27.981405187 +0000 UTC m=+0.147287160 container exec_died 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:11:28 np0005540697 systemd[1]: libpod-conmon-6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62.scope: Deactivated successfully.
Dec  1 04:11:28 np0005540697 podman[228997]: 2025-12-01 09:11:28.080300773 +0000 UTC m=+0.126613256 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  1 04:11:28 np0005540697 systemd[1]: e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f-4428a816076b5484.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:11:28 np0005540697 systemd[1]: e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f-4428a816076b5484.service: Failed with result 'exit-code'.
Dec  1 04:11:28 np0005540697 python3.9[229176]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:29 np0005540697 python3.9[229328]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  1 04:11:29 np0005540697 podman[203700]: time="2025-12-01T09:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 04:11:29 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28294 "" "Go-http-client/1.1"
Dec  1 04:11:29 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4263 "" "Go-http-client/1.1"
Dec  1 04:11:30 np0005540697 podman[229465]: 2025-12-01 09:11:30.479608149 +0000 UTC m=+0.141412683 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, version=9.4, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=)
Dec  1 04:11:30 np0005540697 python3.9[229511]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:30 np0005540697 systemd[1]: Started libpod-conmon-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope.
Dec  1 04:11:30 np0005540697 podman[229513]: 2025-12-01 09:11:30.822822424 +0000 UTC m=+0.116112025 container exec 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 04:11:30 np0005540697 podman[229513]: 2025-12-01 09:11:30.858449069 +0000 UTC m=+0.151738660 container exec_died 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec  1 04:11:30 np0005540697 systemd[1]: libpod-conmon-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope: Deactivated successfully.
Dec  1 04:11:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:11:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:11:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 04:11:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 04:11:31 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:11:31 np0005540697 openstack_network_exporter[205866]: ERROR   09:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 04:11:31 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:11:31 np0005540697 python3.9[229694]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:31 np0005540697 podman[229695]: 2025-12-01 09:11:31.712346469 +0000 UTC m=+0.078938912 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 04:11:31 np0005540697 systemd[1]: Started libpod-conmon-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope.
Dec  1 04:11:31 np0005540697 podman[229719]: 2025-12-01 09:11:31.835799296 +0000 UTC m=+0.105315798 container exec 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Dec  1 04:11:31 np0005540697 podman[229719]: 2025-12-01 09:11:31.868671572 +0000 UTC m=+0.138188084 container exec_died 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec  1 04:11:31 np0005540697 systemd[1]: libpod-conmon-110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0.scope: Deactivated successfully.
Dec  1 04:11:32 np0005540697 python3.9[229902]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:33 np0005540697 python3.9[230056]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec  1 04:11:34 np0005540697 python3.9[230220]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:34 np0005540697 systemd[1]: Started libpod-conmon-e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.scope.
Dec  1 04:11:34 np0005540697 podman[230221]: 2025-12-01 09:11:34.93031708 +0000 UTC m=+0.173955502 container exec e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:11:34 np0005540697 podman[230221]: 2025-12-01 09:11:34.964596051 +0000 UTC m=+0.208234383 container exec_died e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 04:11:35 np0005540697 systemd[1]: libpod-conmon-e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.scope: Deactivated successfully.
Dec  1 04:11:35 np0005540697 python3.9[230400]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:35 np0005540697 systemd[1]: Started libpod-conmon-e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.scope.
Dec  1 04:11:36 np0005540697 podman[230401]: 2025-12-01 09:11:36.003898186 +0000 UTC m=+0.109361747 container exec e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 04:11:36 np0005540697 podman[230401]: 2025-12-01 09:11:36.040616929 +0000 UTC m=+0.146080490 container exec_died e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_ipmi, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:11:36 np0005540697 systemd[1]: libpod-conmon-e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f.scope: Deactivated successfully.
Dec  1 04:11:36 np0005540697 podman[230417]: 2025-12-01 09:11:36.150540959 +0000 UTC m=+0.138668755 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:11:36 np0005540697 python3.9[230596]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:37 np0005540697 podman[230597]: 2025-12-01 09:11:37.049012536 +0000 UTC m=+0.076504121 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6)
Dec  1 04:11:37 np0005540697 python3.9[230767]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec  1 04:11:38 np0005540697 podman[230933]: 2025-12-01 09:11:38.704524819 +0000 UTC m=+0.077704181 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 04:11:38 np0005540697 python3.9[230932]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:38 np0005540697 systemd[1]: Started libpod-conmon-f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359.scope.
Dec  1 04:11:38 np0005540697 podman[230954]: 2025-12-01 09:11:38.848633489 +0000 UTC m=+0.114616049 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:11:38 np0005540697 podman[230960]: 2025-12-01 09:11:38.858741169 +0000 UTC m=+0.102724082 container exec f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vendor=Red Hat, Inc.)
Dec  1 04:11:38 np0005540697 podman[230960]: 2025-12-01 09:11:38.890679103 +0000 UTC m=+0.134662006 container exec_died f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, 
release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 04:11:38 np0005540697 systemd[1]: libpod-conmon-f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359.scope: Deactivated successfully.
Dec  1 04:11:39 np0005540697 python3.9[231163]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 04:11:39 np0005540697 systemd[1]: Started libpod-conmon-f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359.scope.
Dec  1 04:11:39 np0005540697 podman[231164]: 2025-12-01 09:11:39.967628254 +0000 UTC m=+0.090698233 container exec f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, release=1214.1726694543)
Dec  1 04:11:40 np0005540697 podman[231164]: 2025-12-01 09:11:40.001811584 +0000 UTC m=+0.124881563 container exec_died f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., 
build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vendor=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4)
Dec  1 04:11:40 np0005540697 systemd[1]: libpod-conmon-f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359.scope: Deactivated successfully.
Dec  1 04:11:40 np0005540697 python3.9[231345]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:41 np0005540697 python3.9[231497]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:42 np0005540697 python3.9[231649]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:43 np0005540697 python3.9[231772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580301.962705-844-238207534947144/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:44 np0005540697 python3.9[231924]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:45 np0005540697 python3.9[232076]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:46 np0005540697 python3.9[232154]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:47 np0005540697 python3.9[232306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:47 np0005540697 python3.9[232384]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.l8z9y16c recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:48 np0005540697 python3.9[232536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:49 np0005540697 python3.9[232614]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:50 np0005540697 python3.9[232767]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:11:50 np0005540697 podman[232845]: 2025-12-01 09:11:50.694457253 +0000 UTC m=+0.071274007 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 04:11:51 np0005540697 python3[232942]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 04:11:52 np0005540697 python3.9[233094]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:52 np0005540697 podman[233144]: 2025-12-01 09:11:52.655320934 +0000 UTC m=+0.079302723 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 04:11:52 np0005540697 python3.9[233191]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:53 np0005540697 python3.9[233343]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:54 np0005540697 python3.9[233421]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:55 np0005540697 python3.9[233575]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:55 np0005540697 python3.9[233653]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:56 np0005540697 python3.9[233805]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:57 np0005540697 python3.9[233883]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:58 np0005540697 podman[234007]: 2025-12-01 09:11:58.392236488 +0000 UTC m=+0.132812786 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:11:58 np0005540697 python3.9[234052]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:11:59 np0005540697 python3.9[234178]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764580317.7686574-969-242253453459419/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:59 np0005540697 podman[203700]: time="2025-12-01T09:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 04:11:59 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 04:11:59 np0005540697 podman[203700]: @ - - [01/Dec/2025:09:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4288 "" "Go-http-client/1.1"
Dec  1 04:12:00 np0005540697 python3.9[234330]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:12:00 np0005540697 podman[234338]: 2025-12-01 09:12:00.725696846 +0000 UTC m=+0.092966046 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, maintainer=Red Hat, Inc.)
Dec  1 04:12:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:12:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 04:12:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 04:12:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 04:12:01 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:12:01 np0005540697 openstack_network_exporter[205866]: ERROR   09:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 04:12:01 np0005540697 openstack_network_exporter[205866]: 
Dec  1 04:12:01 np0005540697 python3.9[234501]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:12:02 np0005540697 podman[234628]: 2025-12-01 09:12:02.63781917 +0000 UTC m=+0.088128188 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 04:12:02 np0005540697 python3.9[234680]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:12:03 np0005540697 python3.9[234832]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:12:04 np0005540697 python3.9[234985]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:12:05 np0005540697 python3.9[235139]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:12:06 np0005540697 podman[235266]: 2025-12-01 09:12:06.695607306 +0000 UTC m=+0.103550513 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 04:12:06 np0005540697 python3.9[235314]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:12:07 np0005540697 systemd[1]: session-26.scope: Deactivated successfully.
Dec  1 04:12:07 np0005540697 systemd[1]: session-26.scope: Consumed 1min 39.690s CPU time.
Dec  1 04:12:07 np0005540697 systemd-logind[792]: Session 26 logged out. Waiting for processes to exit.
Dec  1 04:12:07 np0005540697 systemd-logind[792]: Removed session 26.
Dec  1 04:12:07 np0005540697 podman[235339]: 2025-12-01 09:12:07.689313145 +0000 UTC m=+0.103795130 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 04:12:09 np0005540697 nova_compute[189491]: 2025-12-01 09:12:09.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:12:09 np0005540697 nova_compute[189491]: 2025-12-01 09:12:09.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 04:12:09 np0005540697 nova_compute[189491]: 2025-12-01 09:12:09.741 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 04:12:09 np0005540697 nova_compute[189491]: 2025-12-01 09:12:09.741 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:12:09 np0005540697 nova_compute[189491]: 2025-12-01 09:12:09.742 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 04:12:09 np0005540697 podman[235358]: 2025-12-01 09:12:09.748721957 +0000 UTC m=+0.105145412 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec  1 04:12:09 np0005540697 podman[235359]: 2025-12-01 09:12:09.766001709 +0000 UTC m=+0.125399947 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 04:12:09 np0005540697 nova_compute[189491]: 2025-12-01 09:12:09.812 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:12:10 np0005540697 nova_compute[189491]: 2025-12-01 09:12:10.870 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 04:12:10 np0005540697 nova_compute[189491]: 2025-12-01 09:12:10.903 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:12:10 np0005540697 nova_compute[189491]: 2025-12-01 09:12:10.904 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:12:10 np0005540697 nova_compute[189491]: 2025-12-01 09:12:10.904 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:12:10 np0005540697 nova_compute[189491]: 2025-12-01 09:12:10.905 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.352 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.353 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5719MB free_disk=72.44137573242188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.353 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.353 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.521 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.521 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.546 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.604 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.608 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 04:12:11 np0005540697 nova_compute[189491]: 2025-12-01 09:12:11.609 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 04:12:12 np0005540697 nova_compute[189491]: 2025-12-01 09:12:12.454 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:12:12 np0005540697 nova_compute[189491]: 2025-12-01 09:12:12.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:12:12 np0005540697 nova_compute[189491]: 2025-12-01 09:12:12.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:12:12 np0005540697 nova_compute[189491]: 2025-12-01 09:12:12.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:12:12 np0005540697 nova_compute[189491]: 2025-12-01 09:12:12.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:12:12 np0005540697 nova_compute[189491]: 2025-12-01 09:12:12.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 04:12:13 np0005540697 systemd-logind[792]: New session 27 of user zuul.
Dec  1 04:12:13 np0005540697 systemd[1]: Started Session 27 of User zuul.
Dec  1 04:12:13 np0005540697 nova_compute[189491]: 2025-12-01 09:12:13.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:12:13 np0005540697 nova_compute[189491]: 2025-12-01 09:12:13.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:12:13 np0005540697 nova_compute[189491]: 2025-12-01 09:12:13.713 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 04:12:13 np0005540697 nova_compute[189491]: 2025-12-01 09:12:13.713 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 04:12:13 np0005540697 nova_compute[189491]: 2025-12-01 09:12:13.755 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 04:12:13 np0005540697 nova_compute[189491]: 2025-12-01 09:12:13.757 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 04:12:14 np0005540697 python3.9[235551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:12:16 np0005540697 python3.9[235707]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec  1 04:12:17 np0005540697 python3.9[235860]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:12:18 np0005540697 python3.9[235944]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.777 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.778 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:19 np0005540697 ceilometer_agent_compute[200222]: 2025-12-01 09:12:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 04:12:21 np0005540697 podman[236021]: 2025-12-01 09:12:21.708720341 +0000 UTC m=+0.073968773 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 04:12:22 np0005540697 python3.9[236125]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:12:23 np0005540697 podman[236220]: 2025-12-01 09:12:23.10470365 +0000 UTC m=+0.101855242 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  1 04:12:23 np0005540697 python3.9[236266]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764580341.5302153-54-107322953527483/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:12:24 np0005540697 python3.9[236418]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:12:25 np0005540697 python3.9[236570]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:12:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:12:26.494 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 04:12:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:12:26.495 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 04:12:26 np0005540697 ovn_metadata_agent[106654]: 2025-12-01 09:12:26.495 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 04:12:26 np0005540697 python3.9[236693]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764580344.9800735-77-198330163798308/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 09:12:27 compute-0 python3.9[236845]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 09:12:27 compute-0 systemd[1]: Stopping System Logging Service...
Dec  1 09:12:28 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec  1 09:12:28 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec  1 09:12:28 compute-0 systemd[1]: Stopped System Logging Service.
Dec  1 09:12:28 compute-0 systemd[1]: rsyslog.service: Consumed 4.329s CPU time, 8.2M memory peak, read 0B from disk, written 6.8M to disk.
Dec  1 09:12:28 compute-0 systemd[1]: Starting System Logging Service...
Dec  1 09:12:28 compute-0 rsyslogd[236849]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236849" x-info="https://www.rsyslog.com"] start
Dec  1 09:12:28 compute-0 systemd[1]: Started System Logging Service.
Dec  1 09:12:28 compute-0 rsyslogd[236849]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 09:12:28 compute-0 rsyslogd[236849]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec  1 09:12:28 compute-0 rsyslogd[236849]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec  1 09:12:28 compute-0 rsyslogd[236849]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec  1 09:12:28 compute-0 rsyslogd[236849]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Dec  1 09:12:28 compute-0 podman[236880]: 2025-12-01 09:12:28.745196865 +0000 UTC m=+0.117069673 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:12:28 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec  1 09:12:28 compute-0 systemd[1]: session-27.scope: Consumed 12.106s CPU time.
Dec  1 09:12:28 compute-0 systemd-logind[792]: Session 27 logged out. Waiting for processes to exit.
Dec  1 09:12:28 compute-0 systemd-logind[792]: Removed session 27.
Dec  1 09:12:29 compute-0 podman[203700]: time="2025-12-01T09:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:12:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:12:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4283 "" "Go-http-client/1.1"
Dec  1 09:12:31 compute-0 openstack_network_exporter[205866]: ERROR   09:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:12:31 compute-0 openstack_network_exporter[205866]: ERROR   09:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:12:31 compute-0 openstack_network_exporter[205866]: ERROR   09:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:12:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:12:31 compute-0 openstack_network_exporter[205866]: ERROR   09:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:12:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:12:31 compute-0 openstack_network_exporter[205866]: ERROR   09:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:12:31 compute-0 podman[236900]: 2025-12-01 09:12:31.716021282 +0000 UTC m=+0.090225650 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, version=9.4, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Dec  1 09:12:33 compute-0 podman[236920]: 2025-12-01 09:12:33.718905736 +0000 UTC m=+0.091282635 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:12:37 compute-0 podman[236943]: 2025-12-01 09:12:37.773954196 +0000 UTC m=+0.130677564 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 09:12:37 compute-0 podman[236961]: 2025-12-01 09:12:37.87672215 +0000 UTC m=+0.108795392 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350)
Dec  1 09:12:40 compute-0 podman[236982]: 2025-12-01 09:12:40.705931886 +0000 UTC m=+0.082192683 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 09:12:40 compute-0 podman[236983]: 2025-12-01 09:12:40.78450456 +0000 UTC m=+0.150912178 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 09:12:52 compute-0 podman[237031]: 2025-12-01 09:12:52.741347864 +0000 UTC m=+0.103007104 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:12:53 compute-0 podman[237055]: 2025-12-01 09:12:53.793910235 +0000 UTC m=+0.159078027 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 09:12:55 compute-0 systemd-logind[792]: New session 28 of user zuul.
Dec  1 09:12:55 compute-0 systemd[1]: Started Session 28 of User zuul.
Dec  1 09:12:56 compute-0 python3[237252]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 09:12:58 compute-0 python3[237475]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:12:59 compute-0 podman[237595]: 2025-12-01 09:12:59.703138394 +0000 UTC m=+0.078586262 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm)
Dec  1 09:12:59 compute-0 podman[203700]: time="2025-12-01T09:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:12:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:12:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4286 "" "Go-http-client/1.1"
Dec  1 09:12:59 compute-0 python3[237648]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:13:01 compute-0 openstack_network_exporter[205866]: ERROR   09:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:13:01 compute-0 openstack_network_exporter[205866]: ERROR   09:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:13:01 compute-0 openstack_network_exporter[205866]: ERROR   09:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:13:01 compute-0 openstack_network_exporter[205866]: ERROR   09:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:13:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:13:01 compute-0 openstack_network_exporter[205866]: ERROR   09:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:13:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:13:02 compute-0 podman[237776]: 2025-12-01 09:13:02.692689235 +0000 UTC m=+0.087858106 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., config_id=edpm)
Dec  1 09:13:02 compute-0 python3[237817]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 09:13:03 compute-0 podman[237973]: 2025-12-01 09:13:03.946791463 +0000 UTC m=+0.101560689 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:13:04 compute-0 python3[237974]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 09:13:06 compute-0 python3[238221]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:13:07 compute-0 python3[238385]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:13:08 compute-0 podman[238425]: 2025-12-01 09:13:08.741045135 +0000 UTC m=+0.097562613 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:13:08 compute-0 podman[238424]: 2025-12-01 09:13:08.75279002 +0000 UTC m=+0.122769365 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Dec  1 09:13:11 compute-0 nova_compute[189491]: 2025-12-01 09:13:11.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:11 compute-0 nova_compute[189491]: 2025-12-01 09:13:11.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:11 compute-0 nova_compute[189491]: 2025-12-01 09:13:11.749 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:13:11 compute-0 nova_compute[189491]: 2025-12-01 09:13:11.751 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:13:11 compute-0 nova_compute[189491]: 2025-12-01 09:13:11.751 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:13:11 compute-0 nova_compute[189491]: 2025-12-01 09:13:11.752 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:13:11 compute-0 podman[238460]: 2025-12-01 09:13:11.767363451 +0000 UTC m=+0.134842598 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:13:11 compute-0 podman[238461]: 2025-12-01 09:13:11.768714784 +0000 UTC m=+0.142671169 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.187 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.188 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5685MB free_disk=72.43580627441406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.189 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.189 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.322 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.323 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.404 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.476 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.476 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.492 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.523 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.553 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.580 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.582 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:13:12 compute-0 nova_compute[189491]: 2025-12-01 09:13:12.582 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:13:13 compute-0 nova_compute[189491]: 2025-12-01 09:13:13.583 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:13 compute-0 nova_compute[189491]: 2025-12-01 09:13:13.584 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:13 compute-0 nova_compute[189491]: 2025-12-01 09:13:13.585 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:13:13 compute-0 nova_compute[189491]: 2025-12-01 09:13:13.710 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:13 compute-0 nova_compute[189491]: 2025-12-01 09:13:13.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:14 compute-0 nova_compute[189491]: 2025-12-01 09:13:14.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:14 compute-0 nova_compute[189491]: 2025-12-01 09:13:14.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:13:14 compute-0 nova_compute[189491]: 2025-12-01 09:13:14.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:13:14 compute-0 nova_compute[189491]: 2025-12-01 09:13:14.738 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 09:13:14 compute-0 nova_compute[189491]: 2025-12-01 09:13:14.739 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:15 compute-0 nova_compute[189491]: 2025-12-01 09:13:15.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:15 compute-0 nova_compute[189491]: 2025-12-01 09:13:15.912 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:13:23 compute-0 podman[238506]: 2025-12-01 09:13:23.749864785 +0000 UTC m=+0.119796902 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:13:24 compute-0 podman[238530]: 2025-12-01 09:13:24.741799162 +0000 UTC m=+0.114691639 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  1 09:13:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:13:26.496 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:13:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:13:26.496 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:13:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:13:26.496 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:13:29 compute-0 podman[203700]: time="2025-12-01T09:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:13:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:13:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4290 "" "Go-http-client/1.1"
Dec  1 09:13:30 compute-0 podman[238550]: 2025-12-01 09:13:30.726853222 +0000 UTC m=+0.091058113 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  1 09:13:31 compute-0 openstack_network_exporter[205866]: ERROR   09:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:13:31 compute-0 openstack_network_exporter[205866]: ERROR   09:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:13:31 compute-0 openstack_network_exporter[205866]: ERROR   09:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:13:31 compute-0 openstack_network_exporter[205866]: ERROR   09:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:13:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:13:31 compute-0 openstack_network_exporter[205866]: ERROR   09:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:13:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:13:33 compute-0 podman[238569]: 2025-12-01 09:13:33.714348616 +0000 UTC m=+0.083664395 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543)
Dec  1 09:13:34 compute-0 podman[238591]: 2025-12-01 09:13:34.771738253 +0000 UTC m=+0.130158735 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:13:39 compute-0 podman[238615]: 2025-12-01 09:13:39.717279682 +0000 UTC m=+0.081746809 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 09:13:39 compute-0 podman[238616]: 2025-12-01 09:13:39.744538695 +0000 UTC m=+0.099867800 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:13:42 compute-0 podman[238654]: 2025-12-01 09:13:42.725459629 +0000 UTC m=+0.099507609 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 09:13:42 compute-0 podman[238655]: 2025-12-01 09:13:42.759765352 +0000 UTC m=+0.125769488 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 09:13:54 compute-0 podman[238699]: 2025-12-01 09:13:54.698993603 +0000 UTC m=+0.076403847 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:13:55 compute-0 podman[238723]: 2025-12-01 09:13:55.71263895 +0000 UTC m=+0.087192538 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:13:59 compute-0 podman[203700]: time="2025-12-01T09:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:13:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:13:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4284 "" "Go-http-client/1.1"
Dec  1 09:14:01 compute-0 openstack_network_exporter[205866]: ERROR   09:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:14:01 compute-0 openstack_network_exporter[205866]: ERROR   09:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:14:01 compute-0 openstack_network_exporter[205866]: ERROR   09:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:14:01 compute-0 openstack_network_exporter[205866]: ERROR   09:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:14:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:14:01 compute-0 openstack_network_exporter[205866]: ERROR   09:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:14:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:14:01 compute-0 podman[238742]: 2025-12-01 09:14:01.764443259 +0000 UTC m=+0.124022572 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 09:14:04 compute-0 podman[238762]: 2025-12-01 09:14:04.74329 +0000 UTC m=+0.114599153 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container)
Dec  1 09:14:05 compute-0 podman[238781]: 2025-12-01 09:14:05.713481808 +0000 UTC m=+0.086862320 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:14:07 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec  1 09:14:07 compute-0 systemd[1]: session-28.scope: Consumed 10.878s CPU time.
Dec  1 09:14:07 compute-0 systemd-logind[792]: Session 28 logged out. Waiting for processes to exit.
Dec  1 09:14:07 compute-0 systemd-logind[792]: Removed session 28.
Dec  1 09:14:10 compute-0 podman[238805]: 2025-12-01 09:14:10.734051637 +0000 UTC m=+0.084475522 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:14:10 compute-0 podman[238804]: 2025-12-01 09:14:10.73624941 +0000 UTC m=+0.102367676 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Dec  1 09:14:12 compute-0 nova_compute[189491]: 2025-12-01 09:14:12.717 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:14:13 compute-0 podman[238843]: 2025-12-01 09:14:13.801977732 +0000 UTC m=+0.168808411 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:14:13 compute-0 podman[238844]: 2025-12-01 09:14:13.819183859 +0000 UTC m=+0.175039991 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 09:14:14 compute-0 nova_compute[189491]: 2025-12-01 09:14:14.054 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:14:14 compute-0 nova_compute[189491]: 2025-12-01 09:14:14.056 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:14:14 compute-0 nova_compute[189491]: 2025-12-01 09:14:14.058 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:14:14 compute-0 nova_compute[189491]: 2025-12-01 09:14:14.059 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:14:14 compute-0 nova_compute[189491]: 2025-12-01 09:14:14.475 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:14:14 compute-0 nova_compute[189491]: 2025-12-01 09:14:14.477 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5708MB free_disk=72.4417610168457GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:14:14 compute-0 nova_compute[189491]: 2025-12-01 09:14:14.477 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:14:14 compute-0 nova_compute[189491]: 2025-12-01 09:14:14.477 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:14:15 compute-0 nova_compute[189491]: 2025-12-01 09:14:15.142 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:14:15 compute-0 nova_compute[189491]: 2025-12-01 09:14:15.143 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:14:15 compute-0 nova_compute[189491]: 2025-12-01 09:14:15.166 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:14:15 compute-0 nova_compute[189491]: 2025-12-01 09:14:15.406 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:14:15 compute-0 nova_compute[189491]: 2025-12-01 09:14:15.408 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:14:15 compute-0 nova_compute[189491]: 2025-12-01 09:14:15.409 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.406 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.407 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.408 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.408 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.751 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.752 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.753 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.753 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.753 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.754 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:14:16 compute-0 nova_compute[189491]: 2025-12-01 09:14:16.754 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 09:14:17 compute-0 nova_compute[189491]: 2025-12-01 09:14:17.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.778 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.779 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:14:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:14:25 compute-0 podman[238889]: 2025-12-01 09:14:25.720055393 +0000 UTC m=+0.081422998 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:14:25 compute-0 podman[238913]: 2025-12-01 09:14:25.860679168 +0000 UTC m=+0.091989374 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Dec  1 09:14:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:14:26.497 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:14:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:14:26.497 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:14:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:14:26.498 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:14:29 compute-0 podman[203700]: time="2025-12-01T09:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:14:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:14:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4293 "" "Go-http-client/1.1"
Dec  1 09:14:31 compute-0 openstack_network_exporter[205866]: ERROR   09:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:14:31 compute-0 openstack_network_exporter[205866]: ERROR   09:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:14:31 compute-0 openstack_network_exporter[205866]: ERROR   09:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:14:31 compute-0 openstack_network_exporter[205866]: ERROR   09:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:14:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:14:31 compute-0 openstack_network_exporter[205866]: ERROR   09:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:14:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:14:32 compute-0 podman[238934]: 2025-12-01 09:14:32.718620682 +0000 UTC m=+0.095575673 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:14:35 compute-0 podman[238954]: 2025-12-01 09:14:35.745157631 +0000 UTC m=+0.112235596 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, version=9.4, container_name=kepler, config_id=edpm, release-0.7.12=)
Dec  1 09:14:35 compute-0 podman[238974]: 2025-12-01 09:14:35.91723957 +0000 UTC m=+0.127152989 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:14:41 compute-0 podman[238999]: 2025-12-01 09:14:41.740890937 +0000 UTC m=+0.104183390 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, distribution-scope=public)
Dec  1 09:14:41 compute-0 podman[239000]: 2025-12-01 09:14:41.745602942 +0000 UTC m=+0.106913428 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:14:44 compute-0 podman[239034]: 2025-12-01 09:14:44.790783284 +0000 UTC m=+0.149491351 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 09:14:44 compute-0 podman[239035]: 2025-12-01 09:14:44.830471097 +0000 UTC m=+0.173877293 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:14:56 compute-0 podman[239081]: 2025-12-01 09:14:56.717861214 +0000 UTC m=+0.088123020 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:14:56 compute-0 podman[239082]: 2025-12-01 09:14:56.778829765 +0000 UTC m=+0.144438368 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:14:59 compute-0 podman[203700]: time="2025-12-01T09:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:14:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:14:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4289 "" "Go-http-client/1.1"
Dec  1 09:15:01 compute-0 openstack_network_exporter[205866]: ERROR   09:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:15:01 compute-0 openstack_network_exporter[205866]: ERROR   09:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:15:01 compute-0 openstack_network_exporter[205866]: ERROR   09:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:15:01 compute-0 openstack_network_exporter[205866]: ERROR   09:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:15:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:15:01 compute-0 openstack_network_exporter[205866]: ERROR   09:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:15:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:15:03 compute-0 podman[239122]: 2025-12-01 09:15:03.73359019 +0000 UTC m=+0.106995060 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:15:06 compute-0 podman[239142]: 2025-12-01 09:15:06.704631133 +0000 UTC m=+0.072558503 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:15:06 compute-0 podman[239143]: 2025-12-01 09:15:06.717309611 +0000 UTC m=+0.068493375 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, version=9.4, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Dec  1 09:15:12 compute-0 podman[239180]: 2025-12-01 09:15:12.774876328 +0000 UTC m=+0.134196119 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Dec  1 09:15:12 compute-0 podman[239181]: 2025-12-01 09:15:12.77371802 +0000 UTC m=+0.125174530 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 09:15:13 compute-0 nova_compute[189491]: 2025-12-01 09:15:13.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:13 compute-0 nova_compute[189491]: 2025-12-01 09:15:13.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:13 compute-0 nova_compute[189491]: 2025-12-01 09:15:13.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:13 compute-0 nova_compute[189491]: 2025-12-01 09:15:13.870 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:15:13 compute-0 nova_compute[189491]: 2025-12-01 09:15:13.871 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:15:13 compute-0 nova_compute[189491]: 2025-12-01 09:15:13.871 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:15:13 compute-0 nova_compute[189491]: 2025-12-01 09:15:13.872 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.336 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.337 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5703MB free_disk=72.44157791137695GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.337 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.338 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.403 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.403 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.433 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.486 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.489 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:15:14 compute-0 nova_compute[189491]: 2025-12-01 09:15:14.489 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:15:15 compute-0 nova_compute[189491]: 2025-12-01 09:15:15.491 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:15 compute-0 nova_compute[189491]: 2025-12-01 09:15:15.491 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:15:15 compute-0 nova_compute[189491]: 2025-12-01 09:15:15.491 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:15:15 compute-0 nova_compute[189491]: 2025-12-01 09:15:15.521 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 09:15:15 compute-0 nova_compute[189491]: 2025-12-01 09:15:15.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:15 compute-0 nova_compute[189491]: 2025-12-01 09:15:15.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:15:15 compute-0 podman[239217]: 2025-12-01 09:15:15.753561846 +0000 UTC m=+0.111653842 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  1 09:15:15 compute-0 podman[239218]: 2025-12-01 09:15:15.790630416 +0000 UTC m=+0.142012819 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 09:15:16 compute-0 nova_compute[189491]: 2025-12-01 09:15:16.710 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:16 compute-0 nova_compute[189491]: 2025-12-01 09:15:16.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:17 compute-0 nova_compute[189491]: 2025-12-01 09:15:17.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:18 compute-0 nova_compute[189491]: 2025-12-01 09:15:18.216 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:18 compute-0 nova_compute[189491]: 2025-12-01 09:15:18.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:15:18 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:15:18.937 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:15:18 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:15:18.939 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:15:18 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:15:18.940 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:15:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:15:26.498 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:15:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:15:26.499 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:15:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:15:26.499 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:15:27 compute-0 podman[239263]: 2025-12-01 09:15:27.738240432 +0000 UTC m=+0.091996634 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:15:27 compute-0 podman[239264]: 2025-12-01 09:15:27.770751722 +0000 UTC m=+0.115600408 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  1 09:15:29 compute-0 podman[203700]: time="2025-12-01T09:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:15:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:15:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4295 "" "Go-http-client/1.1"
Dec  1 09:15:31 compute-0 openstack_network_exporter[205866]: ERROR   09:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:15:31 compute-0 openstack_network_exporter[205866]: ERROR   09:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:15:31 compute-0 openstack_network_exporter[205866]: ERROR   09:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:15:31 compute-0 openstack_network_exporter[205866]: ERROR   09:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:15:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:15:31 compute-0 openstack_network_exporter[205866]: ERROR   09:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:15:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:15:34 compute-0 podman[239301]: 2025-12-01 09:15:34.738526202 +0000 UTC m=+0.109782616 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:15:37 compute-0 podman[239325]: 2025-12-01 09:15:37.766640701 +0000 UTC m=+0.124030123 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, version=9.4)
Dec  1 09:15:37 compute-0 podman[239324]: 2025-12-01 09:15:37.784909964 +0000 UTC m=+0.147716328 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:15:43 compute-0 podman[239365]: 2025-12-01 09:15:43.760509903 +0000 UTC m=+0.116641804 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 09:15:43 compute-0 podman[239364]: 2025-12-01 09:15:43.765756769 +0000 UTC m=+0.137078238 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Dec  1 09:15:46 compute-0 podman[239402]: 2025-12-01 09:15:46.759547086 +0000 UTC m=+0.125189011 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  1 09:15:46 compute-0 podman[239403]: 2025-12-01 09:15:46.835975931 +0000 UTC m=+0.184916141 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:15:58 compute-0 podman[239448]: 2025-12-01 09:15:58.757309513 +0000 UTC m=+0.120861216 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:15:58 compute-0 podman[239449]: 2025-12-01 09:15:58.773760052 +0000 UTC m=+0.131227517 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:15:59 compute-0 podman[203700]: time="2025-12-01T09:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:15:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:15:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4294 "" "Go-http-client/1.1"
Dec  1 09:16:01 compute-0 openstack_network_exporter[205866]: ERROR   09:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:16:01 compute-0 openstack_network_exporter[205866]: ERROR   09:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:16:01 compute-0 openstack_network_exporter[205866]: ERROR   09:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:16:01 compute-0 openstack_network_exporter[205866]: ERROR   09:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:16:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:16:01 compute-0 openstack_network_exporter[205866]: ERROR   09:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:16:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:16:05 compute-0 podman[239493]: 2025-12-01 09:16:05.772396031 +0000 UTC m=+0.125268876 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:16:08 compute-0 podman[239512]: 2025-12-01 09:16:08.740630449 +0000 UTC m=+0.110683303 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:16:08 compute-0 podman[239513]: 2025-12-01 09:16:08.754349691 +0000 UTC m=+0.122908149 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, io.openshift.expose-services=, config_id=edpm, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec  1 09:16:13 compute-0 nova_compute[189491]: 2025-12-01 09:16:13.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:14 compute-0 nova_compute[189491]: 2025-12-01 09:16:14.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:14 compute-0 nova_compute[189491]: 2025-12-01 09:16:14.751 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:14 compute-0 nova_compute[189491]: 2025-12-01 09:16:14.752 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:14 compute-0 nova_compute[189491]: 2025-12-01 09:16:14.752 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:14 compute-0 nova_compute[189491]: 2025-12-01 09:16:14.752 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:16:15 compute-0 podman[239556]: 2025-12-01 09:16:15.096305498 +0000 UTC m=+0.441925573 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:16:15 compute-0 podman[239555]: 2025-12-01 09:16:15.130073314 +0000 UTC m=+0.486400097 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.172 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.173 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5669MB free_disk=72.44162368774414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.174 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.174 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.231 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.232 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.258 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.273 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.276 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:16:15 compute-0 nova_compute[189491]: 2025-12-01 09:16:15.277 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:15.838 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:16:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:15.839 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:16:16 compute-0 nova_compute[189491]: 2025-12-01 09:16:16.278 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:16 compute-0 nova_compute[189491]: 2025-12-01 09:16:16.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:16 compute-0 nova_compute[189491]: 2025-12-01 09:16:16.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:16:16 compute-0 nova_compute[189491]: 2025-12-01 09:16:16.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:16:17 compute-0 nova_compute[189491]: 2025-12-01 09:16:17.193 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 09:16:17 compute-0 nova_compute[189491]: 2025-12-01 09:16:17.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:17 compute-0 nova_compute[189491]: 2025-12-01 09:16:17.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:17 compute-0 nova_compute[189491]: 2025-12-01 09:16:17.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:17 compute-0 nova_compute[189491]: 2025-12-01 09:16:17.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:16:17 compute-0 podman[239592]: 2025-12-01 09:16:17.765217018 +0000 UTC m=+0.134040357 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:16:17 compute-0 podman[239593]: 2025-12-01 09:16:17.812708255 +0000 UTC m=+0.164556805 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  1 09:16:18 compute-0 nova_compute[189491]: 2025-12-01 09:16:18.710 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:18 compute-0 nova_compute[189491]: 2025-12-01 09:16:18.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.778 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.779 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:16:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:16:24 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:24.842 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:16:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:26.500 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:26.501 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:26.502 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:26 compute-0 nova_compute[189491]: 2025-12-01 09:16:26.965 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:26 compute-0 nova_compute[189491]: 2025-12-01 09:16:26.966 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:26 compute-0 nova_compute[189491]: 2025-12-01 09:16:26.987 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.098 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.100 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.112 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.113 189495 INFO nova.compute.claims [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.249 189495 DEBUG nova.compute.provider_tree [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.266 189495 DEBUG nova.scheduler.client.report [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.291 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.293 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.344 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.345 189495 DEBUG nova.network.neutron [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.378 189495 INFO nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.426 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.506 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.510 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.511 189495 INFO nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Creating image(s)#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.514 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.515 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.516 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.516 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.517 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.951 189495 WARNING oslo_policy.policy [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  1 09:16:27 compute-0 nova_compute[189491]: 2025-12-01 09:16:27.952 189495 WARNING oslo_policy.policy [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  1 09:16:28 compute-0 nova_compute[189491]: 2025-12-01 09:16:28.716 189495 DEBUG nova.network.neutron [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Successfully created port: 1632735e-15c5-4d6b-a450-baa001b88ac2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:16:28 compute-0 nova_compute[189491]: 2025-12-01 09:16:28.831 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:28 compute-0 nova_compute[189491]: 2025-12-01 09:16:28.903 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5.part --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:28 compute-0 nova_compute[189491]: 2025-12-01 09:16:28.905 189495 DEBUG nova.virt.images [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] 304c689d-2799-45ae-8166-517d5fd107b2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 09:16:28 compute-0 nova_compute[189491]: 2025-12-01 09:16:28.906 189495 DEBUG nova.privsep.utils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 09:16:28 compute-0 nova_compute[189491]: 2025-12-01 09:16:28.906 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5.part /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:29 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.148 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5.part /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5.converted" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:29 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.155 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:29 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.249 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5.converted --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:29 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.250 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:29 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.276 189495 INFO oslo.privsep.daemon [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpabo4ni44/privsep.sock']#033[00m
Dec  1 09:16:29 compute-0 podman[203700]: time="2025-12-01T09:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:16:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:16:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4309 "" "Go-http-client/1.1"
Dec  1 09:16:29 compute-0 podman[239658]: 2025-12-01 09:16:29.75989505 +0000 UTC m=+0.128255139 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:16:29 compute-0 podman[239659]: 2025-12-01 09:16:29.762135484 +0000 UTC m=+0.121634628 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4)
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.055 189495 INFO oslo.privsep.daemon [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.937 239700 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.944 239700 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.948 239700 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:29.948 239700 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239700#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.146 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.200 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.203 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.204 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.229 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.330 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.333 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5,backing_fmt=raw /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.387 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5,backing_fmt=raw /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.389 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.390 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.457 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.459 189495 DEBUG nova.virt.disk.api [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Checking if we can resize image /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.460 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.552 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.554 189495 DEBUG nova.virt.disk.api [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Cannot resize image /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.556 189495 DEBUG nova.objects.instance [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'migration_context' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.575 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.576 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.578 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.579 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.581 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.582 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.615 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.616 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.665 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.667 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.694 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.777 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.779 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.780 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.804 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.888 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.891 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.947 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.949 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:30 compute-0 nova_compute[189491]: 2025-12-01 09:16:30.950 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.025 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.027 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.028 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Ensure instance console log exists: /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.029 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.030 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.031 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.168 189495 DEBUG nova.network.neutron [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Successfully updated port: 1632735e-15c5-4d6b-a450-baa001b88ac2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.187 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.187 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.188 189495 DEBUG nova.network.neutron [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:16:31 compute-0 openstack_network_exporter[205866]: ERROR   09:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:16:31 compute-0 openstack_network_exporter[205866]: ERROR   09:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:16:31 compute-0 openstack_network_exporter[205866]: ERROR   09:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:16:31 compute-0 openstack_network_exporter[205866]: ERROR   09:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:16:31 compute-0 openstack_network_exporter[205866]: ERROR   09:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.774 189495 DEBUG nova.compute.manager [req-48634356-8183-4028-86a1-a7f95756c089 req-6cce7073-c9a8-4065-8384-c48df9d19119 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received event network-changed-1632735e-15c5-4d6b-a450-baa001b88ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.774 189495 DEBUG nova.compute.manager [req-48634356-8183-4028-86a1-a7f95756c089 req-6cce7073-c9a8-4065-8384-c48df9d19119 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Refreshing instance network info cache due to event network-changed-1632735e-15c5-4d6b-a450-baa001b88ac2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.775 189495 DEBUG oslo_concurrency.lockutils [req-48634356-8183-4028-86a1-a7f95756c089 req-6cce7073-c9a8-4065-8384-c48df9d19119 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:16:31 compute-0 nova_compute[189491]: 2025-12-01 09:16:31.901 189495 DEBUG nova.network.neutron [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.027 189495 DEBUG nova.network.neutron [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.058 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.059 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Instance network_info: |[{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.061 189495 DEBUG oslo_concurrency.lockutils [req-48634356-8183-4028-86a1-a7f95756c089 req-6cce7073-c9a8-4065-8384-c48df9d19119 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.061 189495 DEBUG nova.network.neutron [req-48634356-8183-4028-86a1-a7f95756c089 req-6cce7073-c9a8-4065-8384-c48df9d19119 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Refreshing network info cache for port 1632735e-15c5-4d6b-a450-baa001b88ac2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.069 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Start _get_guest_xml network_info=[{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:15:08Z,direct_url=<?>,disk_format='qcow2',id=304c689d-2799-45ae-8166-517d5fd107b2,min_disk=0,min_ram=0,name='cirros',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:15:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '304c689d-2799-45ae-8166-517d5fd107b2'}], 'ephemerals': [{'size': 1, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.084 189495 WARNING nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.105 189495 DEBUG nova.virt.libvirt.host [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.107 189495 DEBUG nova.virt.libvirt.host [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.114 189495 DEBUG nova.virt.libvirt.host [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.115 189495 DEBUG nova.virt.libvirt.host [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.116 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.117 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:15:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='719a52fe-7f4b-48c0-b9dc-6a91d4ec600c',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:15:08Z,direct_url=<?>,disk_format='qcow2',id=304c689d-2799-45ae-8166-517d5fd107b2,min_disk=0,min_ram=0,name='cirros',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:15:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.118 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.119 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.119 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.120 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.121 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.122 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.123 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.124 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.124 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.125 189495 DEBUG nova.virt.hardware [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.131 189495 DEBUG nova.privsep.utils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.133 189495 DEBUG nova.virt.libvirt.vif [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:16:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-tw90szn6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:16:27Z,user_data=None,user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=7ed22ffd-011d-48d7-962a-8606e471a59e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.134 189495 DEBUG nova.network.os_vif_util [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.136 189495 DEBUG nova.network.os_vif_util [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:bd:b4,bridge_name='br-int',has_traffic_filtering=True,id=1632735e-15c5-4d6b-a450-baa001b88ac2,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1632735e-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.139 189495 DEBUG nova.objects.instance [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.261 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <uuid>7ed22ffd-011d-48d7-962a-8606e471a59e</uuid>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <name>instance-00000001</name>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <memory>524288</memory>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <nova:name>test_0</nova:name>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:16:33</nova:creationTime>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <nova:flavor name="m1.small">
Dec  1 09:16:33 compute-0 nova_compute[189491]:        <nova:memory>512</nova:memory>
Dec  1 09:16:33 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:16:33 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:16:33 compute-0 nova_compute[189491]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 09:16:33 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:16:33 compute-0 nova_compute[189491]:        <nova:user uuid="962a55152ff34fdda5eae1f8aee3a7a9">admin</nova:user>
Dec  1 09:16:33 compute-0 nova_compute[189491]:        <nova:project uuid="fac95b8a995a4174bfa966a8d9d9aa01">admin</nova:project>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="304c689d-2799-45ae-8166-517d5fd107b2"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:16:33 compute-0 nova_compute[189491]:        <nova:port uuid="1632735e-15c5-4d6b-a450-baa001b88ac2">
Dec  1 09:16:33 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="192.168.0.55" ipVersion="4"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <system>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <entry name="serial">7ed22ffd-011d-48d7-962a-8606e471a59e</entry>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <entry name="uuid">7ed22ffd-011d-48d7-962a-8606e471a59e</entry>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </system>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <os>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  </os>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <features>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  </features>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <target dev="vdb" bus="virtio"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.config"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:d4:bd:b4"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <target dev="tap1632735e-15"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/console.log" append="off"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <video>
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </video>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:16:33 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:16:33 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:16:33 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:16:33 compute-0 nova_compute[189491]: </domain>
Dec  1 09:16:33 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.262 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Preparing to wait for external event network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.263 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.263 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.264 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.265 189495 DEBUG nova.virt.libvirt.vif [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:16:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-tw90szn6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:16:27Z,user_data=None,user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=7ed22ffd-011d-48d7-962a-8606e471a59e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.266 189495 DEBUG nova.network.os_vif_util [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.267 189495 DEBUG nova.network.os_vif_util [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:bd:b4,bridge_name='br-int',has_traffic_filtering=True,id=1632735e-15c5-4d6b-a450-baa001b88ac2,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1632735e-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.268 189495 DEBUG os_vif [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:bd:b4,bridge_name='br-int',has_traffic_filtering=True,id=1632735e-15c5-4d6b-a450-baa001b88ac2,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1632735e-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.324 189495 DEBUG ovsdbapp.backend.ovs_idl [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.325 189495 DEBUG ovsdbapp.backend.ovs_idl [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.325 189495 DEBUG ovsdbapp.backend.ovs_idl [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.326 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.326 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.327 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.328 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.329 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.332 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.346 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.347 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.347 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:16:33 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.348 189495 INFO oslo.privsep.daemon [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp0bfmrzdn/privsep.sock']#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.050 189495 INFO oslo.privsep.daemon [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.942 239737 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.946 239737 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.948 239737 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:33.949 239737 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239737#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.368 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.369 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1632735e-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.371 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1632735e-15, col_values=(('external_ids', {'iface-id': '1632735e-15c5-4d6b-a450-baa001b88ac2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:bd:b4', 'vm-uuid': '7ed22ffd-011d-48d7-962a-8606e471a59e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.375 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:34 compute-0 NetworkManager[56318]: <info>  [1764580594.3761] manager: (tap1632735e-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.379 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.389 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.390 189495 INFO os_vif [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:bd:b4,bridge_name='br-int',has_traffic_filtering=True,id=1632735e-15c5-4d6b-a450-baa001b88ac2,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1632735e-15')
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.456 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.457 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.457 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.457 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No VIF found with MAC fa:16:3e:d4:bd:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  1 09:16:34 compute-0 nova_compute[189491]: 2025-12-01 09:16:34.458 189495 INFO nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Using config drive
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.114 189495 DEBUG nova.network.neutron [req-48634356-8183-4028-86a1-a7f95756c089 req-6cce7073-c9a8-4065-8384-c48df9d19119 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated VIF entry in instance network info cache for port 1632735e-15c5-4d6b-a450-baa001b88ac2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.115 189495 DEBUG nova.network.neutron [req-48634356-8183-4028-86a1-a7f95756c089 req-6cce7073-c9a8-4065-8384-c48df9d19119 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.139 189495 DEBUG oslo_concurrency.lockutils [req-48634356-8183-4028-86a1-a7f95756c089 req-6cce7073-c9a8-4065-8384-c48df9d19119 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.264 189495 INFO nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Creating config drive at /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.config
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.275 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7y_x7xr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.404 189495 DEBUG oslo_concurrency.processutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7y_x7xr" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:16:35 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  1 09:16:35 compute-0 NetworkManager[56318]: <info>  [1764580595.5381] manager: (tap1632735e-15): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Dec  1 09:16:35 compute-0 kernel: tap1632735e-15: entered promiscuous mode
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.547 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:16:35 compute-0 ovn_controller[97794]: 2025-12-01T09:16:35Z|00027|binding|INFO|Claiming lport 1632735e-15c5-4d6b-a450-baa001b88ac2 for this chassis.
Dec  1 09:16:35 compute-0 ovn_controller[97794]: 2025-12-01T09:16:35Z|00028|binding|INFO|1632735e-15c5-4d6b-a450-baa001b88ac2: Claiming fa:16:3e:d4:bd:b4 192.168.0.55
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.559 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:16:35 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:35.575 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:bd:b4 192.168.0.55'], port_security=['fa:16:3e:d4:bd:b4 192.168.0.55'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.55/24', 'neutron:device_id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5a5e6d4-6211-447f-b3f6-e2120ff69d87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=260b7b6c-4405-41e2-9dc8-1595161adaf8, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=1632735e-15c5-4d6b-a450-baa001b88ac2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 09:16:35 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:35.577 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 1632735e-15c5-4d6b-a450-baa001b88ac2 in datapath 52d15875-2a2e-463a-bc5d-8fa6b8466bff bound to our chassis
Dec  1 09:16:35 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:35.580 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52d15875-2a2e-463a-bc5d-8fa6b8466bff
Dec  1 09:16:35 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:35.584 106659 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpx_chxssi/privsep.sock']
Dec  1 09:16:35 compute-0 systemd-udevd[239768]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:16:35 compute-0 NetworkManager[56318]: <info>  [1764580595.6262] device (tap1632735e-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:16:35 compute-0 systemd-machined[155812]: New machine qemu-1-instance-00000001.
Dec  1 09:16:35 compute-0 NetworkManager[56318]: <info>  [1764580595.6362] device (tap1632735e-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:16:35 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.646 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:16:35 compute-0 ovn_controller[97794]: 2025-12-01T09:16:35Z|00029|binding|INFO|Setting lport 1632735e-15c5-4d6b-a450-baa001b88ac2 ovn-installed in OVS
Dec  1 09:16:35 compute-0 ovn_controller[97794]: 2025-12-01T09:16:35Z|00030|binding|INFO|Setting lport 1632735e-15c5-4d6b-a450-baa001b88ac2 up in Southbound
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.653 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.978 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764580595.977829, 7ed22ffd-011d-48d7-962a-8606e471a59e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:16:35 compute-0 nova_compute[189491]: 2025-12-01 09:16:35.979 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] VM Started (Lifecycle Event)
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.039 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.047 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764580595.978027, 7ed22ffd-011d-48d7-962a-8606e471a59e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.047 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] VM Paused (Lifecycle Event)
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.171 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:16:36 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.178 189495 DEBUG nova.compute.manager [req-07681262-1fa6-45a2-8223-b680e33e7746 req-e5b488aa-8e03-4807-85de-cd38eed282c6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received event network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.178 189495 DEBUG oslo_concurrency.lockutils [req-07681262-1fa6-45a2-8223-b680e33e7746 req-e5b488aa-8e03-4807-85de-cd38eed282c6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.179 189495 DEBUG oslo_concurrency.lockutils [req-07681262-1fa6-45a2-8223-b680e33e7746 req-e5b488aa-8e03-4807-85de-cd38eed282c6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.180 189495 DEBUG oslo_concurrency.lockutils [req-07681262-1fa6-45a2-8223-b680e33e7746 req-e5b488aa-8e03-4807-85de-cd38eed282c6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.180 189495 DEBUG nova.compute.manager [req-07681262-1fa6-45a2-8223-b680e33e7746 req-e5b488aa-8e03-4807-85de-cd38eed282c6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Processing event network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.182 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  1 09:16:36 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.203 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.208 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.230 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.230 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764580596.2093399, 7ed22ffd-011d-48d7-962a-8606e471a59e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.230 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] VM Resumed (Lifecycle Event)
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.237 189495 INFO nova.virt.libvirt.driver [-] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Instance spawned successfully.
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.238 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.266 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.272 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.312 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.325 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.325 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.326 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.327 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.328 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.329 189495 DEBUG nova.virt.libvirt.driver [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:16:36 compute-0 podman[239790]: 2025-12-01 09:16:36.334404794 +0000 UTC m=+0.127152022 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.382 106659 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.383 106659 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpx_chxssi/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.255 239818 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.263 239818 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.267 239818 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.268 239818 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239818
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.388 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[11a2eaf7-347b-4165-a4a8-b48453c8de84]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.389 189495 INFO nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Took 8.88 seconds to spawn the instance on the hypervisor.
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.390 189495 DEBUG nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.459 189495 INFO nova.compute.manager [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Took 9.40 seconds to build instance.
Dec  1 09:16:36 compute-0 nova_compute[189491]: 2025-12-01 09:16:36.476 189495 DEBUG oslo_concurrency.lockutils [None req-456cafe4-9fc4-419c-bebc-5b202570e627 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.920 239818 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.920 239818 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:16:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:36.920 239818 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:16:37 compute-0 nova_compute[189491]: 2025-12-01 09:16:37.081 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:16:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:37.505 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[dec307c7-7210-476d-9a4f-3bb0c433bc1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:16:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:37.506 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52d15875-21 in ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  1 09:16:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:37.508 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52d15875-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  1 09:16:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:37.509 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[72d8ba10-9e04-4b8f-a477-8b843d4c6a59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:16:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:37.512 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[04942c9d-bea1-42ea-bc7b-a9849f322938]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:16:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:37.546 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3487ee-3edc-4735-8e0c-09340ea02b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:16:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:37.583 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b546e680-8395-42b8-b1c1-48e26d564cdc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:16:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:37.585 106659 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpi520for1/privsep.sock']
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.302 106659 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.303 106659 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpi520for1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.171 239843 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.176 239843 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.178 239843 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.178 239843 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239843
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.306 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[7b52f89e-48e0-468a-98de-5a681c4a5a70]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:16:38 compute-0 nova_compute[189491]: 2025-12-01 09:16:38.329 189495 DEBUG nova.compute.manager [req-6403b7b8-cc3b-4f89-ba49-c9cbbb6efdfd req-6a03c8ee-f2d2-4e0b-8712-df26efa77834 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received event network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 09:16:38 compute-0 nova_compute[189491]: 2025-12-01 09:16:38.329 189495 DEBUG oslo_concurrency.lockutils [req-6403b7b8-cc3b-4f89-ba49-c9cbbb6efdfd req-6a03c8ee-f2d2-4e0b-8712-df26efa77834 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:16:38 compute-0 nova_compute[189491]: 2025-12-01 09:16:38.330 189495 DEBUG oslo_concurrency.lockutils [req-6403b7b8-cc3b-4f89-ba49-c9cbbb6efdfd req-6a03c8ee-f2d2-4e0b-8712-df26efa77834 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:16:38 compute-0 nova_compute[189491]: 2025-12-01 09:16:38.331 189495 DEBUG oslo_concurrency.lockutils [req-6403b7b8-cc3b-4f89-ba49-c9cbbb6efdfd req-6a03c8ee-f2d2-4e0b-8712-df26efa77834 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:16:38 compute-0 nova_compute[189491]: 2025-12-01 09:16:38.331 189495 DEBUG nova.compute.manager [req-6403b7b8-cc3b-4f89-ba49-c9cbbb6efdfd req-6a03c8ee-f2d2-4e0b-8712-df26efa77834 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] No waiting events found dispatching network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 09:16:38 compute-0 nova_compute[189491]: 2025-12-01 09:16:38.332 189495 WARNING nova.compute.manager [req-6403b7b8-cc3b-4f89-ba49-c9cbbb6efdfd req-6a03c8ee-f2d2-4e0b-8712-df26efa77834 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received unexpected event network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 for instance with vm_state active and task_state None.
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.853 239843 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.853 239843 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:16:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:38.853 239843 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:16:39 compute-0 nova_compute[189491]: 2025-12-01 09:16:39.378 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.442 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb8d027-5cf7-437f-a306-1584b95a9b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 NetworkManager[56318]: <info>  [1764580599.4922] manager: (tap52d15875-20): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.488 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[505eb91a-bd59-4169-bf20-9338d63882ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.548 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[13aff138-3b03-4dcb-a24a-199b520b7d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.554 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[8f467add-6dd0-4350-9e7b-019062127041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 systemd-udevd[239870]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:16:39 compute-0 NetworkManager[56318]: <info>  [1764580599.5909] device (tap52d15875-20): carrier: link connected
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.599 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[fede79f8-5ff3-47fb-8eb3-c9a1178bd95c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.619 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0006c8-1860-4aea-9931-0ba415187dcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52d15875-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:8c:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383928, 'reachable_time': 35285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239897, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 podman[239852]: 2025-12-01 09:16:39.628293275 +0000 UTC m=+0.115420908 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.640 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[43337748-3244-40aa-a662-d62d02f8b764]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:8ca9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383928, 'tstamp': 383928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239909, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 podman[239853]: 2025-12-01 09:16:39.65747594 +0000 UTC m=+0.128218088 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm)
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.671 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[8231bae9-f7ef-49a3-9761-d5f89e0d2e66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52d15875-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:8c:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383928, 'reachable_time': 35285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239913, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.708 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f6af9e-4cb9-4f85-80ea-453d9efd51ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.769 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d3179ffe-f47d-41af-8793-578c4ef30325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.771 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d15875-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.772 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.772 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d15875-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:16:39 compute-0 nova_compute[189491]: 2025-12-01 09:16:39.775 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:39 compute-0 kernel: tap52d15875-20: entered promiscuous mode
Dec  1 09:16:39 compute-0 NetworkManager[56318]: <info>  [1764580599.7769] manager: (tap52d15875-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec  1 09:16:39 compute-0 nova_compute[189491]: 2025-12-01 09:16:39.781 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.782 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52d15875-20, col_values=(('external_ids', {'iface-id': 'dbcd2eb8-9722-4ebb-b254-d57f599617d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:16:39 compute-0 nova_compute[189491]: 2025-12-01 09:16:39.783 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:39 compute-0 ovn_controller[97794]: 2025-12-01T09:16:39Z|00031|binding|INFO|Releasing lport dbcd2eb8-9722-4ebb-b254-d57f599617d1 from this chassis (sb_readonly=0)
Dec  1 09:16:39 compute-0 nova_compute[189491]: 2025-12-01 09:16:39.801 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.802 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52d15875-2a2e-463a-bc5d-8fa6b8466bff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52d15875-2a2e-463a-bc5d-8fa6b8466bff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.803 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[faac76f6-8c3a-4f97-a3a8-99fae053cfd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.804 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-52d15875-2a2e-463a-bc5d-8fa6b8466bff
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/52d15875-2a2e-463a-bc5d-8fa6b8466bff.pid.haproxy
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID 52d15875-2a2e-463a-bc5d-8fa6b8466bff
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:16:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:16:39.805 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'env', 'PROCESS_TAG=haproxy-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52d15875-2a2e-463a-bc5d-8fa6b8466bff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:16:40 compute-0 podman[239944]: 2025-12-01 09:16:40.257831077 +0000 UTC m=+0.068744421 container create 2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 09:16:40 compute-0 systemd[1]: Started libpod-conmon-2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5.scope.
Dec  1 09:16:40 compute-0 podman[239944]: 2025-12-01 09:16:40.228630262 +0000 UTC m=+0.039543626 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:16:40 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3334922e409c57716a880baf1b1202bda6449b513322f5e2d0b0edc6459fb31e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:16:40 compute-0 podman[239944]: 2025-12-01 09:16:40.390634775 +0000 UTC m=+0.201548129 container init 2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 09:16:40 compute-0 podman[239944]: 2025-12-01 09:16:40.398300169 +0000 UTC m=+0.209213513 container start 2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 09:16:40 compute-0 neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff[239959]: [NOTICE]   (239963) : New worker (239965) forked
Dec  1 09:16:40 compute-0 neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff[239959]: [NOTICE]   (239963) : Loading success.
Dec  1 09:16:42 compute-0 nova_compute[189491]: 2025-12-01 09:16:42.084 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:44 compute-0 nova_compute[189491]: 2025-12-01 09:16:44.384 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:45 compute-0 podman[239977]: 2025-12-01 09:16:45.749772598 +0000 UTC m=+0.108165822 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:16:45 compute-0 podman[239976]: 2025-12-01 09:16:45.794735254 +0000 UTC m=+0.153135838 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=)
Dec  1 09:16:47 compute-0 nova_compute[189491]: 2025-12-01 09:16:47.086 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:48 compute-0 podman[240017]: 2025-12-01 09:16:48.786795037 +0000 UTC m=+0.140368690 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 09:16:48 compute-0 podman[240018]: 2025-12-01 09:16:48.816045324 +0000 UTC m=+0.162899124 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 09:16:48 compute-0 nova_compute[189491]: 2025-12-01 09:16:48.874 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:48 compute-0 ovn_controller[97794]: 2025-12-01T09:16:48Z|00032|binding|INFO|Releasing lport dbcd2eb8-9722-4ebb-b254-d57f599617d1 from this chassis (sb_readonly=0)
Dec  1 09:16:48 compute-0 NetworkManager[56318]: <info>  [1764580608.8775] manager: (patch-br-int-to-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec  1 09:16:48 compute-0 NetworkManager[56318]: <info>  [1764580608.8844] device (patch-br-int-to-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 09:16:48 compute-0 NetworkManager[56318]: <info>  [1764580608.8972] manager: (patch-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec  1 09:16:48 compute-0 NetworkManager[56318]: <info>  [1764580608.9032] device (patch-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 09:16:48 compute-0 ovn_controller[97794]: 2025-12-01T09:16:48Z|00033|binding|INFO|Releasing lport dbcd2eb8-9722-4ebb-b254-d57f599617d1 from this chassis (sb_readonly=0)
Dec  1 09:16:48 compute-0 nova_compute[189491]: 2025-12-01 09:16:48.905 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:48 compute-0 NetworkManager[56318]: <info>  [1764580608.9130] manager: (patch-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec  1 09:16:48 compute-0 nova_compute[189491]: 2025-12-01 09:16:48.913 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:48 compute-0 NetworkManager[56318]: <info>  [1764580608.9183] manager: (patch-br-int-to-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec  1 09:16:48 compute-0 NetworkManager[56318]: <info>  [1764580608.9227] device (patch-br-int-to-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 09:16:48 compute-0 NetworkManager[56318]: <info>  [1764580608.9265] device (patch-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 09:16:49 compute-0 nova_compute[189491]: 2025-12-01 09:16:49.140 189495 DEBUG nova.compute.manager [req-7e891bae-8201-4c78-a9a8-789e946b710a req-42e3d9fc-ca0b-4ef4-aa7e-97afc377b800 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received event network-changed-1632735e-15c5-4d6b-a450-baa001b88ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:16:49 compute-0 nova_compute[189491]: 2025-12-01 09:16:49.140 189495 DEBUG nova.compute.manager [req-7e891bae-8201-4c78-a9a8-789e946b710a req-42e3d9fc-ca0b-4ef4-aa7e-97afc377b800 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Refreshing instance network info cache due to event network-changed-1632735e-15c5-4d6b-a450-baa001b88ac2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:16:49 compute-0 nova_compute[189491]: 2025-12-01 09:16:49.140 189495 DEBUG oslo_concurrency.lockutils [req-7e891bae-8201-4c78-a9a8-789e946b710a req-42e3d9fc-ca0b-4ef4-aa7e-97afc377b800 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:16:49 compute-0 nova_compute[189491]: 2025-12-01 09:16:49.141 189495 DEBUG oslo_concurrency.lockutils [req-7e891bae-8201-4c78-a9a8-789e946b710a req-42e3d9fc-ca0b-4ef4-aa7e-97afc377b800 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:16:49 compute-0 nova_compute[189491]: 2025-12-01 09:16:49.141 189495 DEBUG nova.network.neutron [req-7e891bae-8201-4c78-a9a8-789e946b710a req-42e3d9fc-ca0b-4ef4-aa7e-97afc377b800 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Refreshing network info cache for port 1632735e-15c5-4d6b-a450-baa001b88ac2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:16:49 compute-0 nova_compute[189491]: 2025-12-01 09:16:49.387 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:52 compute-0 nova_compute[189491]: 2025-12-01 09:16:52.045 189495 DEBUG nova.network.neutron [req-7e891bae-8201-4c78-a9a8-789e946b710a req-42e3d9fc-ca0b-4ef4-aa7e-97afc377b800 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated VIF entry in instance network info cache for port 1632735e-15c5-4d6b-a450-baa001b88ac2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:16:52 compute-0 nova_compute[189491]: 2025-12-01 09:16:52.048 189495 DEBUG nova.network.neutron [req-7e891bae-8201-4c78-a9a8-789e946b710a req-42e3d9fc-ca0b-4ef4-aa7e-97afc377b800 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:16:52 compute-0 nova_compute[189491]: 2025-12-01 09:16:52.072 189495 DEBUG oslo_concurrency.lockutils [req-7e891bae-8201-4c78-a9a8-789e946b710a req-42e3d9fc-ca0b-4ef4-aa7e-97afc377b800 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:16:52 compute-0 nova_compute[189491]: 2025-12-01 09:16:52.088 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:54 compute-0 nova_compute[189491]: 2025-12-01 09:16:54.392 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:57 compute-0 nova_compute[189491]: 2025-12-01 09:16:57.092 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:59 compute-0 nova_compute[189491]: 2025-12-01 09:16:59.396 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:16:59 compute-0 podman[203700]: time="2025-12-01T09:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:16:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:16:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Dec  1 09:17:00 compute-0 podman[240061]: 2025-12-01 09:17:00.721179953 +0000 UTC m=+0.096967892 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:17:00 compute-0 podman[240062]: 2025-12-01 09:17:00.738487231 +0000 UTC m=+0.095191520 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 09:17:01 compute-0 openstack_network_exporter[205866]: ERROR   09:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:17:01 compute-0 openstack_network_exporter[205866]: ERROR   09:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:17:01 compute-0 openstack_network_exporter[205866]: ERROR   09:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:17:01 compute-0 openstack_network_exporter[205866]: ERROR   09:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:17:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:17:01 compute-0 openstack_network_exporter[205866]: ERROR   09:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:17:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:17:02 compute-0 nova_compute[189491]: 2025-12-01 09:17:02.097 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:04 compute-0 ovn_controller[97794]: 2025-12-01T09:17:04Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:bd:b4 192.168.0.55
Dec  1 09:17:04 compute-0 ovn_controller[97794]: 2025-12-01T09:17:04Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:bd:b4 192.168.0.55
Dec  1 09:17:04 compute-0 nova_compute[189491]: 2025-12-01 09:17:04.402 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:06 compute-0 podman[240110]: 2025-12-01 09:17:06.749283122 +0000 UTC m=+0.115402297 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 09:17:07 compute-0 nova_compute[189491]: 2025-12-01 09:17:07.107 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:09 compute-0 nova_compute[189491]: 2025-12-01 09:17:09.410 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:10 compute-0 nova_compute[189491]: 2025-12-01 09:17:10.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:10 compute-0 nova_compute[189491]: 2025-12-01 09:17:10.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:17:10 compute-0 podman[240130]: 2025-12-01 09:17:10.731704482 +0000 UTC m=+0.091501872 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:17:10 compute-0 podman[240131]: 2025-12-01 09:17:10.759916223 +0000 UTC m=+0.116729401 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release=1214.1726694543, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Dec  1 09:17:12 compute-0 nova_compute[189491]: 2025-12-01 09:17:12.109 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:14 compute-0 nova_compute[189491]: 2025-12-01 09:17:14.414 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:14 compute-0 nova_compute[189491]: 2025-12-01 09:17:14.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:14 compute-0 nova_compute[189491]: 2025-12-01 09:17:14.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:15 compute-0 nova_compute[189491]: 2025-12-01 09:17:15.743 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:15 compute-0 nova_compute[189491]: 2025-12-01 09:17:15.785 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:15 compute-0 nova_compute[189491]: 2025-12-01 09:17:15.785 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:15 compute-0 nova_compute[189491]: 2025-12-01 09:17:15.786 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:15 compute-0 nova_compute[189491]: 2025-12-01 09:17:15.786 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:17:15 compute-0 podman[240176]: 2025-12-01 09:17:15.910152642 +0000 UTC m=+0.065899002 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 09:17:15 compute-0 podman[240175]: 2025-12-01 09:17:15.953225902 +0000 UTC m=+0.111501214 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.071 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.140 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.141 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.219 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.220 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.293 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.294 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.375 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.713 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.716 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5240MB free_disk=72.38882827758789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.717 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:16 compute-0 nova_compute[189491]: 2025-12-01 09:17:16.717 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.111 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.235 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.236 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.236 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.280 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.316 189495 ERROR nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [req-1057b099-2d08-4daa-a0d9-9bfb47809d90] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 143c7fe7-af1f-477a-978c-6a994d785d98.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-1057b099-2d08-4daa-a0d9-9bfb47809d90"}]}#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.334 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.355 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.356 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.375 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.398 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.437 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.484 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updated inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.485 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.485 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.512 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:17:17 compute-0 nova_compute[189491]: 2025-12-01 09:17:17.512 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:18 compute-0 nova_compute[189491]: 2025-12-01 09:17:18.484 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:18 compute-0 nova_compute[189491]: 2025-12-01 09:17:18.485 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:17:18 compute-0 nova_compute[189491]: 2025-12-01 09:17:18.485 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:17:18 compute-0 ovn_controller[97794]: 2025-12-01T09:17:18Z|00034|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  1 09:17:19 compute-0 nova_compute[189491]: 2025-12-01 09:17:19.116 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:17:19 compute-0 nova_compute[189491]: 2025-12-01 09:17:19.117 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:17:19 compute-0 nova_compute[189491]: 2025-12-01 09:17:19.117 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:17:19 compute-0 nova_compute[189491]: 2025-12-01 09:17:19.118 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:17:19 compute-0 nova_compute[189491]: 2025-12-01 09:17:19.419 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:19 compute-0 podman[240228]: 2025-12-01 09:17:19.731306004 +0000 UTC m=+0.095064306 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 09:17:19 compute-0 podman[240229]: 2025-12-01 09:17:19.752935647 +0000 UTC m=+0.123421521 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.490 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.555 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.556 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.557 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.557 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.558 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.558 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.559 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:17:20 compute-0 nova_compute[189491]: 2025-12-01 09:17:20.870 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:17:21 compute-0 nova_compute[189491]: 2025-12-01 09:17:21.864 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:22 compute-0 nova_compute[189491]: 2025-12-01 09:17:22.116 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:24 compute-0 nova_compute[189491]: 2025-12-01 09:17:24.424 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:26.501 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:26.502 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:26.503 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:27 compute-0 nova_compute[189491]: 2025-12-01 09:17:27.119 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:29 compute-0 nova_compute[189491]: 2025-12-01 09:17:29.428 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:29 compute-0 podman[203700]: time="2025-12-01T09:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:17:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:17:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Dec  1 09:17:31 compute-0 openstack_network_exporter[205866]: ERROR   09:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:17:31 compute-0 openstack_network_exporter[205866]: ERROR   09:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:17:31 compute-0 openstack_network_exporter[205866]: ERROR   09:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:17:31 compute-0 openstack_network_exporter[205866]: ERROR   09:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:17:31 compute-0 openstack_network_exporter[205866]: ERROR   09:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:17:31 compute-0 podman[240274]: 2025-12-01 09:17:31.71041186 +0000 UTC m=+0.080354501 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:17:31 compute-0 podman[240275]: 2025-12-01 09:17:31.716398905 +0000 UTC m=+0.077509593 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  1 09:17:32 compute-0 nova_compute[189491]: 2025-12-01 09:17:32.121 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:34 compute-0 nova_compute[189491]: 2025-12-01 09:17:34.434 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:37 compute-0 nova_compute[189491]: 2025-12-01 09:17:37.125 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:37 compute-0 podman[240318]: 2025-12-01 09:17:37.712791576 +0000 UTC m=+0.083468017 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  1 09:17:38 compute-0 nova_compute[189491]: 2025-12-01 09:17:38.619 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:38.620 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:17:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:38.622 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:17:39 compute-0 nova_compute[189491]: 2025-12-01 09:17:39.437 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:41 compute-0 podman[240338]: 2025-12-01 09:17:41.742873797 +0000 UTC m=+0.104220188 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:17:41 compute-0 podman[240339]: 2025-12-01 09:17:41.765661958 +0000 UTC m=+0.122297725 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:17:42 compute-0 nova_compute[189491]: 2025-12-01 09:17:42.129 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:44 compute-0 nova_compute[189491]: 2025-12-01 09:17:44.444 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:45 compute-0 nova_compute[189491]: 2025-12-01 09:17:45.112 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:17:45 compute-0 nova_compute[189491]: 2025-12-01 09:17:45.294 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid 7ed22ffd-011d-48d7-962a-8606e471a59e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:17:45 compute-0 nova_compute[189491]: 2025-12-01 09:17:45.295 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:45 compute-0 nova_compute[189491]: 2025-12-01 09:17:45.295 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:45 compute-0 nova_compute[189491]: 2025-12-01 09:17:45.333 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:46 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:46.623 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:17:46 compute-0 podman[240382]: 2025-12-01 09:17:46.741524455 +0000 UTC m=+0.105010817 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Dec  1 09:17:46 compute-0 podman[240383]: 2025-12-01 09:17:46.747049528 +0000 UTC m=+0.111289428 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true)
Dec  1 09:17:47 compute-0 nova_compute[189491]: 2025-12-01 09:17:47.133 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.137 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.138 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.165 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.286 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.288 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.299 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.300 189495 INFO nova.compute.claims [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.471 189495 DEBUG nova.compute.provider_tree [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.507 189495 DEBUG nova.scheduler.client.report [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.628 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.630 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.897 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:17:48 compute-0 nova_compute[189491]: 2025-12-01 09:17:48.898 189495 DEBUG nova.network.neutron [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.056 189495 INFO nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.198 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.448 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.632 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.634 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.635 189495 INFO nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Creating image(s)#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.636 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.637 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.638 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.651 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.736 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.739 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.741 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.752 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.830 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.832 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5,backing_fmt=raw /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.881 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5,backing_fmt=raw /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.883 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.883 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.954 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.956 189495 DEBUG nova.virt.disk.api [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Checking if we can resize image /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:17:49 compute-0 nova_compute[189491]: 2025-12-01 09:17:49.956 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.024 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.025 189495 DEBUG nova.virt.disk.api [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Cannot resize image /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.026 189495 DEBUG nova.objects.instance [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'migration_context' on Instance uuid 11a8e94c-61e3-4805-b688-e4b9121b30ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.164 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.165 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.166 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.181 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.247 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.248 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.249 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.261 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.315 189495 DEBUG nova.network.neutron [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Successfully updated port: 213d57d5-9e28-4606-937a-97375a401f82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.321 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.321 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.433 189495 DEBUG nova.compute.manager [req-dd515717-e3d1-44cf-b2b2-c75850d267a4 req-47c8093f-3554-4e13-8524-ed9913c35971 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received event network-changed-213d57d5-9e28-4606-937a-97375a401f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.434 189495 DEBUG nova.compute.manager [req-dd515717-e3d1-44cf-b2b2-c75850d267a4 req-47c8093f-3554-4e13-8524-ed9913c35971 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Refreshing instance network info cache due to event network-changed-213d57d5-9e28-4606-937a-97375a401f82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.435 189495 DEBUG oslo_concurrency.lockutils [req-dd515717-e3d1-44cf-b2b2-c75850d267a4 req-47c8093f-3554-4e13-8524-ed9913c35971 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.436 189495 DEBUG oslo_concurrency.lockutils [req-dd515717-e3d1-44cf-b2b2-c75850d267a4 req-47c8093f-3554-4e13-8524-ed9913c35971 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.436 189495 DEBUG nova.network.neutron [req-dd515717-e3d1-44cf-b2b2-c75850d267a4 req-47c8093f-3554-4e13-8524-ed9913c35971 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Refreshing network info cache for port 213d57d5-9e28-4606-937a-97375a401f82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.439 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.444 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 1073741824" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.445 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.446 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.514 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.515 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.515 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Ensure instance console log exists: /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.515 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.516 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.516 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:50 compute-0 nova_compute[189491]: 2025-12-01 09:17:50.565 189495 DEBUG nova.network.neutron [req-dd515717-e3d1-44cf-b2b2-c75850d267a4 req-47c8093f-3554-4e13-8524-ed9913c35971 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:17:50 compute-0 podman[240449]: 2025-12-01 09:17:50.695771623 +0000 UTC m=+0.066166059 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 09:17:50 compute-0 podman[240450]: 2025-12-01 09:17:50.73787177 +0000 UTC m=+0.102625730 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 09:17:51 compute-0 nova_compute[189491]: 2025-12-01 09:17:51.243 189495 DEBUG nova.network.neutron [req-dd515717-e3d1-44cf-b2b2-c75850d267a4 req-47c8093f-3554-4e13-8524-ed9913c35971 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:17:51 compute-0 nova_compute[189491]: 2025-12-01 09:17:51.403 189495 DEBUG oslo_concurrency.lockutils [req-dd515717-e3d1-44cf-b2b2-c75850d267a4 req-47c8093f-3554-4e13-8524-ed9913c35971 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:17:51 compute-0 nova_compute[189491]: 2025-12-01 09:17:51.405 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquired lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:17:51 compute-0 nova_compute[189491]: 2025-12-01 09:17:51.406 189495 DEBUG nova.network.neutron [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:17:52 compute-0 nova_compute[189491]: 2025-12-01 09:17:52.136 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:52 compute-0 nova_compute[189491]: 2025-12-01 09:17:52.190 189495 DEBUG nova.network.neutron [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.433 189495 DEBUG nova.network.neutron [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.877 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Releasing lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.878 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Instance network_info: |[{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.884 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Start _get_guest_xml network_info=[{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:15:08Z,direct_url=<?>,disk_format='qcow2',id=304c689d-2799-45ae-8166-517d5fd107b2,min_disk=0,min_ram=0,name='cirros',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:15:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '304c689d-2799-45ae-8166-517d5fd107b2'}], 'ephemerals': [{'size': 1, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.896 189495 WARNING nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.904 189495 DEBUG nova.virt.libvirt.host [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.905 189495 DEBUG nova.virt.libvirt.host [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.910 189495 DEBUG nova.virt.libvirt.host [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.911 189495 DEBUG nova.virt.libvirt.host [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.912 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.912 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:15:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='719a52fe-7f4b-48c0-b9dc-6a91d4ec600c',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:15:08Z,direct_url=<?>,disk_format='qcow2',id=304c689d-2799-45ae-8166-517d5fd107b2,min_disk=0,min_ram=0,name='cirros',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:15:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.913 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.914 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.914 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.915 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.915 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.916 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.917 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.918 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.919 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.920 189495 DEBUG nova.virt.hardware [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.929 189495 DEBUG nova.virt.libvirt.vif [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh',id=2,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-7mhbbi8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:17:49Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTM2MjY3NzI1NDc3MTE4NDk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  1 09:17:53 compute-0 nova_compute[189491]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTM2MjY3NzI1NDc3MTE4NDk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=11a8e94c-61e3-4805-b688-e4b9121b30ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.931 189495 DEBUG nova.network.os_vif_util [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.933 189495 DEBUG nova.network.os_vif_util [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:b9:7c,bridge_name='br-int',has_traffic_filtering=True,id=213d57d5-9e28-4606-937a-97375a401f82,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap213d57d5-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:17:53 compute-0 nova_compute[189491]: 2025-12-01 09:17:53.935 189495 DEBUG nova.objects.instance [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11a8e94c-61e3-4805-b688-e4b9121b30ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.111 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <uuid>11a8e94c-61e3-4805-b688-e4b9121b30ba</uuid>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <name>instance-00000002</name>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <memory>524288</memory>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <nova:name>vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh</nova:name>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:17:53</nova:creationTime>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <nova:flavor name="m1.small">
Dec  1 09:17:54 compute-0 nova_compute[189491]:        <nova:memory>512</nova:memory>
Dec  1 09:17:54 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:17:54 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:17:54 compute-0 nova_compute[189491]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 09:17:54 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:17:54 compute-0 nova_compute[189491]:        <nova:user uuid="962a55152ff34fdda5eae1f8aee3a7a9">admin</nova:user>
Dec  1 09:17:54 compute-0 nova_compute[189491]:        <nova:project uuid="fac95b8a995a4174bfa966a8d9d9aa01">admin</nova:project>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="304c689d-2799-45ae-8166-517d5fd107b2"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:17:54 compute-0 nova_compute[189491]:        <nova:port uuid="213d57d5-9e28-4606-937a-97375a401f82">
Dec  1 09:17:54 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="192.168.0.178" ipVersion="4"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <system>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <entry name="serial">11a8e94c-61e3-4805-b688-e4b9121b30ba</entry>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <entry name="uuid">11a8e94c-61e3-4805-b688-e4b9121b30ba</entry>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </system>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <os>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  </os>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <features>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  </features>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <target dev="vdb" bus="virtio"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.config"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:03:b9:7c"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <target dev="tap213d57d5-9e"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/console.log" append="off"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <video>
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </video>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:17:53.929 189495 DEBUG nova.virt.libvirt.vif [None req-d86596b7-63 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:17:54 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:17:54 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:17:54 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:17:54 compute-0 nova_compute[189491]: </domain>
Dec  1 09:17:54 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.113 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Preparing to wait for external event network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.114 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.115 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.115 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.117 189495 DEBUG nova.virt.libvirt.vif [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh',id=2,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-7mhbbi8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:17:49Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTM2MjY3NzI1NDc3MTE4NDk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  1 09:17:54 compute-0 nova_compute[189491]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTM2MjY3NzI1NDc3MTE4NDk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=11a8e94c-61e3-4805-b688-e4b9121b30ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.117 189495 DEBUG nova.network.os_vif_util [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.119 189495 DEBUG nova.network.os_vif_util [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:b9:7c,bridge_name='br-int',has_traffic_filtering=True,id=213d57d5-9e28-4606-937a-97375a401f82,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap213d57d5-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.120 189495 DEBUG os_vif [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:b9:7c,bridge_name='br-int',has_traffic_filtering=True,id=213d57d5-9e28-4606-937a-97375a401f82,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap213d57d5-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.121 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.122 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.122 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.127 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.127 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap213d57d5-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.127 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap213d57d5-9e, col_values=(('external_ids', {'iface-id': '213d57d5-9e28-4606-937a-97375a401f82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:b9:7c', 'vm-uuid': '11a8e94c-61e3-4805-b688-e4b9121b30ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:17:54 compute-0 NetworkManager[56318]: <info>  [1764580674.1323] manager: (tap213d57d5-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.134 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.143 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.146 189495 INFO os_vif [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:b9:7c,bridge_name='br-int',has_traffic_filtering=True,id=213d57d5-9e28-4606-937a-97375a401f82,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap213d57d5-9e')#033[00m
Dec  1 09:17:54 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:17:54.117 189495 DEBUG nova.virt.libvirt.vif [None req-d86596b7-63 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.278 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.279 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.279 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.279 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No VIF found with MAC fa:16:3e:03:b9:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.280 189495 INFO nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Using config drive#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.610 189495 INFO nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Creating config drive at /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.config#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.624 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkw7ijg_j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.775 189495 DEBUG oslo_concurrency.processutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkw7ijg_j" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:17:54 compute-0 kernel: tap213d57d5-9e: entered promiscuous mode
Dec  1 09:17:54 compute-0 NetworkManager[56318]: <info>  [1764580674.8718] manager: (tap213d57d5-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.870 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:54 compute-0 ovn_controller[97794]: 2025-12-01T09:17:54Z|00035|binding|INFO|Claiming lport 213d57d5-9e28-4606-937a-97375a401f82 for this chassis.
Dec  1 09:17:54 compute-0 ovn_controller[97794]: 2025-12-01T09:17:54Z|00036|binding|INFO|213d57d5-9e28-4606-937a-97375a401f82: Claiming fa:16:3e:03:b9:7c 192.168.0.178
Dec  1 09:17:54 compute-0 ovn_controller[97794]: 2025-12-01T09:17:54Z|00037|binding|INFO|Setting lport 213d57d5-9e28-4606-937a-97375a401f82 ovn-installed in OVS
Dec  1 09:17:54 compute-0 nova_compute[189491]: 2025-12-01 09:17:54.897 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:54 compute-0 systemd-udevd[240512]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:17:54 compute-0 NetworkManager[56318]: <info>  [1764580674.9335] device (tap213d57d5-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:17:54 compute-0 systemd-machined[155812]: New machine qemu-2-instance-00000002.
Dec  1 09:17:54 compute-0 NetworkManager[56318]: <info>  [1764580674.9422] device (tap213d57d5-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:17:54 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  1 09:17:55 compute-0 ovn_controller[97794]: 2025-12-01T09:17:55Z|00038|binding|INFO|Setting lport 213d57d5-9e28-4606-937a-97375a401f82 up in Southbound
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.023 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:b9:7c 192.168.0.178'], port_security=['fa:16:3e:03:b9:7c 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vdfkxa75cfa3-6buvcyjxf2ua-hietjgfclklq-port-cj54npjlvy2j', 'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vdfkxa75cfa3-6buvcyjxf2ua-hietjgfclklq-port-cj54npjlvy2j', 'neutron:project_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5a5e6d4-6211-447f-b3f6-e2120ff69d87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=260b7b6c-4405-41e2-9dc8-1595161adaf8, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=213d57d5-9e28-4606-937a-97375a401f82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.024 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 213d57d5-9e28-4606-937a-97375a401f82 in datapath 52d15875-2a2e-463a-bc5d-8fa6b8466bff bound to our chassis#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.026 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52d15875-2a2e-463a-bc5d-8fa6b8466bff#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.044 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[ecac04de-ca42-460e-883d-eaea7128c279]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.081 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[ebefc114-535d-4327-a412-30546203af23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.084 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[25a338a5-7c8e-4a0e-a7e6-4bd483702b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.115 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1b3d4c-ac91-4136-b37c-35cd1646fca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.136 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[02481a6e-6a1e-44d5-932a-450afbc9ad2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52d15875-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:8c:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383928, 'reachable_time': 31799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240528, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.157 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c643eb4c-b9f6-49b4-bb4f-27138bce7aeb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383943, 'tstamp': 383943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240529, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383945, 'tstamp': 383945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240529, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.160 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d15875-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.163 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.164 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.165 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d15875-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.166 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.167 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52d15875-20, col_values=(('external_ids', {'iface-id': 'dbcd2eb8-9722-4ebb-b254-d57f599617d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:17:55 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:17:55.168 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.433 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764580675.4325008, 11a8e94c-61e3-4805-b688-e4b9121b30ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.434 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] VM Started (Lifecycle Event)#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.495 189495 DEBUG nova.compute.manager [req-59fed13e-1f3e-49b0-8a98-edc7cbe78364 req-935fa2a5-a9eb-46be-ae3c-ab6b8c1a1906 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received event network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.495 189495 DEBUG oslo_concurrency.lockutils [req-59fed13e-1f3e-49b0-8a98-edc7cbe78364 req-935fa2a5-a9eb-46be-ae3c-ab6b8c1a1906 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.495 189495 DEBUG oslo_concurrency.lockutils [req-59fed13e-1f3e-49b0-8a98-edc7cbe78364 req-935fa2a5-a9eb-46be-ae3c-ab6b8c1a1906 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.495 189495 DEBUG oslo_concurrency.lockutils [req-59fed13e-1f3e-49b0-8a98-edc7cbe78364 req-935fa2a5-a9eb-46be-ae3c-ab6b8c1a1906 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.496 189495 DEBUG nova.compute.manager [req-59fed13e-1f3e-49b0-8a98-edc7cbe78364 req-935fa2a5-a9eb-46be-ae3c-ab6b8c1a1906 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Processing event network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.496 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.502 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.504 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.510 189495 INFO nova.virt.libvirt.driver [-] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Instance spawned successfully.#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.511 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.514 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.692 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.693 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764580675.4326413, 11a8e94c-61e3-4805-b688-e4b9121b30ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.694 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.712 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.713 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.714 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.715 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.716 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.717 189495 DEBUG nova.virt.libvirt.driver [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.738 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.749 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764580675.5019069, 11a8e94c-61e3-4805-b688-e4b9121b30ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.750 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.817 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.823 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.849 189495 INFO nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Took 6.22 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.849 189495 DEBUG nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.851 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:17:55 compute-0 nova_compute[189491]: 2025-12-01 09:17:55.997 189495 INFO nova.compute.manager [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Took 7.75 seconds to build instance.#033[00m
Dec  1 09:17:56 compute-0 nova_compute[189491]: 2025-12-01 09:17:56.018 189495 DEBUG oslo_concurrency.lockutils [None req-d86596b7-63b7-4f6e-8c34-085e35de5ce5 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:57 compute-0 nova_compute[189491]: 2025-12-01 09:17:57.139 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:57 compute-0 nova_compute[189491]: 2025-12-01 09:17:57.614 189495 DEBUG nova.compute.manager [req-a8479a47-a4f2-4222-a8a5-73c80db7db3a req-5806af8a-86db-4cab-b201-8d7fbda86895 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received event network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:17:57 compute-0 nova_compute[189491]: 2025-12-01 09:17:57.616 189495 DEBUG oslo_concurrency.lockutils [req-a8479a47-a4f2-4222-a8a5-73c80db7db3a req-5806af8a-86db-4cab-b201-8d7fbda86895 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:17:57 compute-0 nova_compute[189491]: 2025-12-01 09:17:57.616 189495 DEBUG oslo_concurrency.lockutils [req-a8479a47-a4f2-4222-a8a5-73c80db7db3a req-5806af8a-86db-4cab-b201-8d7fbda86895 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:17:57 compute-0 nova_compute[189491]: 2025-12-01 09:17:57.616 189495 DEBUG oslo_concurrency.lockutils [req-a8479a47-a4f2-4222-a8a5-73c80db7db3a req-5806af8a-86db-4cab-b201-8d7fbda86895 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:17:57 compute-0 nova_compute[189491]: 2025-12-01 09:17:57.617 189495 DEBUG nova.compute.manager [req-a8479a47-a4f2-4222-a8a5-73c80db7db3a req-5806af8a-86db-4cab-b201-8d7fbda86895 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] No waiting events found dispatching network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:17:57 compute-0 nova_compute[189491]: 2025-12-01 09:17:57.617 189495 WARNING nova.compute.manager [req-a8479a47-a4f2-4222-a8a5-73c80db7db3a req-5806af8a-86db-4cab-b201-8d7fbda86895 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received unexpected event network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 for instance with vm_state active and task_state None.#033[00m
Dec  1 09:17:59 compute-0 nova_compute[189491]: 2025-12-01 09:17:59.134 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:17:59 compute-0 podman[203700]: time="2025-12-01T09:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:17:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:17:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Dec  1 09:18:01 compute-0 openstack_network_exporter[205866]: ERROR   09:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:18:01 compute-0 openstack_network_exporter[205866]: ERROR   09:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:18:01 compute-0 openstack_network_exporter[205866]: ERROR   09:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:18:01 compute-0 openstack_network_exporter[205866]: ERROR   09:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:18:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:18:01 compute-0 openstack_network_exporter[205866]: ERROR   09:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:18:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:18:02 compute-0 nova_compute[189491]: 2025-12-01 09:18:02.142 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:02 compute-0 podman[240537]: 2025-12-01 09:18:02.727915919 +0000 UTC m=+0.088717033 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:18:02 compute-0 podman[240538]: 2025-12-01 09:18:02.75279693 +0000 UTC m=+0.111340240 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 09:18:04 compute-0 nova_compute[189491]: 2025-12-01 09:18:04.139 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:07 compute-0 nova_compute[189491]: 2025-12-01 09:18:07.146 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:08 compute-0 podman[240575]: 2025-12-01 09:18:08.791424393 +0000 UTC m=+0.150756032 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:18:09 compute-0 nova_compute[189491]: 2025-12-01 09:18:09.142 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:12 compute-0 nova_compute[189491]: 2025-12-01 09:18:12.148 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:12 compute-0 podman[240595]: 2025-12-01 09:18:12.739570484 +0000 UTC m=+0.109025174 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:18:12 compute-0 podman[240596]: 2025-12-01 09:18:12.780192735 +0000 UTC m=+0.139483080 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, release-0.7.12=, managed_by=edpm_ansible, version=9.4, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec  1 09:18:14 compute-0 nova_compute[189491]: 2025-12-01 09:18:14.146 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:15 compute-0 nova_compute[189491]: 2025-12-01 09:18:15.897 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:18:16 compute-0 nova_compute[189491]: 2025-12-01 09:18:16.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:18:16 compute-0 nova_compute[189491]: 2025-12-01 09:18:16.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:18:16 compute-0 nova_compute[189491]: 2025-12-01 09:18:16.717 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:18:17 compute-0 nova_compute[189491]: 2025-12-01 09:18:17.150 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:17 compute-0 nova_compute[189491]: 2025-12-01 09:18:17.186 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:18:17 compute-0 nova_compute[189491]: 2025-12-01 09:18:17.186 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:18:17 compute-0 nova_compute[189491]: 2025-12-01 09:18:17.186 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:18:17 compute-0 nova_compute[189491]: 2025-12-01 09:18:17.187 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:18:17 compute-0 podman[240636]: 2025-12-01 09:18:17.775295701 +0000 UTC m=+0.128145328 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Dec  1 09:18:17 compute-0 podman[240637]: 2025-12-01 09:18:17.796414988 +0000 UTC m=+0.144716656 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:18:19 compute-0 nova_compute[189491]: 2025-12-01 09:18:19.150 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.779 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.780 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:18:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:19.788 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7ed22ffd-011d-48d7-962a-8606e471a59e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.185 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.204 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.205 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.205 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.206 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.206 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.206 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:18:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:20.209 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7ed22ffd-011d-48d7-962a-8606e471a59e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.265 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.266 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.266 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.267 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.378 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.464 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.466 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.565 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.567 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.640 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.642 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.725 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.737 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.817 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.819 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.878 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.880 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.943 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:18:20 compute-0 nova_compute[189491]: 2025-12-01 09:18:20.944 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.013 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:18:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:21.109 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Mon, 01 Dec 2025 09:18:20 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3269659e-eb05-4819-a017-56657e3b64d5 x-openstack-request-id: req-3269659e-eb05-4819-a017-56657e3b64d5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:18:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:21.109 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7ed22ffd-011d-48d7-962a-8606e471a59e", "name": "test_0", "status": "ACTIVE", "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "user_id": "962a55152ff34fdda5eae1f8aee3a7a9", "metadata": {}, "hostId": "8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1", "image": {"id": "304c689d-2799-45ae-8166-517d5fd107b2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/304c689d-2799-45ae-8166-517d5fd107b2"}]}, "flavor": {"id": "719a52fe-7f4b-48c0-b9dc-6a91d4ec600c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/719a52fe-7f4b-48c0-b9dc-6a91d4ec600c"}]}, "created": "2025-12-01T09:16:25Z", "updated": "2025-12-01T09:16:36Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.55", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d4:bd:b4"}, {"version": 4, "addr": "192.168.122.225", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d4:bd:b4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7ed22ffd-011d-48d7-962a-8606e471a59e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7ed22ffd-011d-48d7-962a-8606e471a59e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T09:16:36.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:18:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:21.109 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7ed22ffd-011d-48d7-962a-8606e471a59e used request id req-3269659e-eb05-4819-a017-56657e3b64d5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:18:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:21.110 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:18:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:21.113 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 11a8e94c-61e3-4805-b688-e4b9121b30ba from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:18:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:21.114 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/11a8e94c-61e3-4805-b688-e4b9121b30ba -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.437 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.439 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5135MB free_disk=72.38729476928711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.439 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.440 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.609 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.610 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.611 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.612 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:18:21 compute-0 podman[240700]: 2025-12-01 09:18:21.690629221 +0000 UTC m=+0.062044482 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.791 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:18:21 compute-0 podman[240701]: 2025-12-01 09:18:21.808230436 +0000 UTC m=+0.163334515 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.844 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.971 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:18:21 compute-0 nova_compute[189491]: 2025-12-01 09:18:21.972 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:18:22 compute-0 nova_compute[189491]: 2025-12-01 09:18:22.152 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.182 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Mon, 01 Dec 2025 09:18:21 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ca95b831-4607-4dfa-abc4-e0e446e77798 x-openstack-request-id: req-ca95b831-4607-4dfa-abc4-e0e446e77798 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.182 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "11a8e94c-61e3-4805-b688-e4b9121b30ba", "name": "vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh", "status": "ACTIVE", "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "user_id": "962a55152ff34fdda5eae1f8aee3a7a9", "metadata": {"metering.server_group": "1555a697-b0aa-4429-98e7-26e6671e228d"}, "hostId": "8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1", "image": {"id": "304c689d-2799-45ae-8166-517d5fd107b2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/304c689d-2799-45ae-8166-517d5fd107b2"}]}, "flavor": {"id": "719a52fe-7f4b-48c0-b9dc-6a91d4ec600c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/719a52fe-7f4b-48c0-b9dc-6a91d4ec600c"}]}, "created": "2025-12-01T09:17:43Z", "updated": "2025-12-01T09:17:55Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.178", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:03:b9:7c"}, {"version": 4, "addr": "192.168.122.238", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:03:b9:7c"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/11a8e94c-61e3-4805-b688-e4b9121b30ba"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/11a8e94c-61e3-4805-b688-e4b9121b30ba"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T09:17:55.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.182 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/11a8e94c-61e3-4805-b688-e4b9121b30ba used request id req-ca95b831-4607-4dfa-abc4-e0e446e77798 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.183 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'name': 'vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.184 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:18:22.184588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.277 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.279 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.279 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.401 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.402 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.402 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.403 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.404 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:18:22.404149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.438 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.439 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.439 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.468 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.468 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.468 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.469 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.469 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.470 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.470 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.470 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 333560290 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.471 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.471 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 1179767 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.472 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.472 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.473 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:18:22.469641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:18:22.472378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.473 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.473 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.473 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.474 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.474 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.475 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.475 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.475 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.475 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.475 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.476 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.476 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:18:22.475020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.477 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.477 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:18:22.477700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 nova_compute[189491]: 2025-12-01 09:18:22.482 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:18:22 compute-0 nova_compute[189491]: 2025-12-01 09:18:22.482 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:18:22 compute-0 nova_compute[189491]: 2025-12-01 09:18:22.483 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:18:22 compute-0 nova_compute[189491]: 2025-12-01 09:18:22.483 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.508 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.535 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.536 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.536 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.536 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.537 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:18:22.536404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.537 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.537 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.537 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.538 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.538 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.539 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.539 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.539 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.539 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.540 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.540 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.540 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.541 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.542 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:18:22.539087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:18:22.541933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.559 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7ed22ffd-011d-48d7-962a-8606e471a59e / tap1632735e-15 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.559 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.563 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 11a8e94c-61e3-4805-b688-e4b9121b30ba / tap213d57d5-9e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.564 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.565 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.565 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh>]
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T09:18:22.565301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.567 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:18:22.567673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:18:22.568737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.570 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:18:22.569882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:18:22.570749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.572 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.572 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.573 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:18:22.571933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.573 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.574 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.574 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:18:22.573410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:18:22.574641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.576 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.576 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh>]
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.576 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T09:18:22.575795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.577 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.577 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:18:22.576905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.577 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 11a8e94c-61e3-4805-b688-e4b9121b30ba: ceilometer.compute.pollsters.NoVolumeException
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.578 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.578 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.579 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.579 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:18:22.578073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 28720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:18:22.579192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/cpu volume: 26470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.581 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.581 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.582 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.582 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.582 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.582 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.583 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:18:22.580390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.584 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:18:22.581526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.584 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:18:22.583721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.585 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.585 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.585 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.585 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.585 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.586 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.586 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:18:22.585183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.587 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:18:22.587521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.588 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:18:22.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:18:24 compute-0 nova_compute[189491]: 2025-12-01 09:18:24.158 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:25 compute-0 ovn_controller[97794]: 2025-12-01T09:18:25Z|00039|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  1 09:18:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:18:26.502 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:18:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:18:26.503 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:18:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:18:26.504 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:18:27 compute-0 nova_compute[189491]: 2025-12-01 09:18:27.155 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:29 compute-0 nova_compute[189491]: 2025-12-01 09:18:29.163 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:29 compute-0 podman[203700]: time="2025-12-01T09:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:18:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:18:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Dec  1 09:18:31 compute-0 openstack_network_exporter[205866]: ERROR   09:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:18:31 compute-0 openstack_network_exporter[205866]: ERROR   09:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:18:31 compute-0 openstack_network_exporter[205866]: ERROR   09:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:18:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:18:31 compute-0 openstack_network_exporter[205866]: ERROR   09:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:18:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:18:31 compute-0 openstack_network_exporter[205866]: ERROR   09:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:18:31 compute-0 ovn_controller[97794]: 2025-12-01T09:18:31Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:b9:7c 192.168.0.178
Dec  1 09:18:31 compute-0 ovn_controller[97794]: 2025-12-01T09:18:31Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:b9:7c 192.168.0.178
Dec  1 09:18:32 compute-0 nova_compute[189491]: 2025-12-01 09:18:32.157 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:33 compute-0 podman[240758]: 2025-12-01 09:18:33.781608795 +0000 UTC m=+0.141084390 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:18:33 compute-0 podman[240759]: 2025-12-01 09:18:33.791621535 +0000 UTC m=+0.143784864 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 09:18:34 compute-0 nova_compute[189491]: 2025-12-01 09:18:34.168 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:37 compute-0 nova_compute[189491]: 2025-12-01 09:18:37.160 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:39 compute-0 nova_compute[189491]: 2025-12-01 09:18:39.173 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:39 compute-0 podman[240801]: 2025-12-01 09:18:39.785357957 +0000 UTC m=+0.130558337 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec  1 09:18:42 compute-0 nova_compute[189491]: 2025-12-01 09:18:42.166 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:43 compute-0 podman[240818]: 2025-12-01 09:18:43.735449691 +0000 UTC m=+0.085092574 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:18:43 compute-0 podman[240819]: 2025-12-01 09:18:43.749294134 +0000 UTC m=+0.102398161 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, name=ubi9, architecture=x86_64, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0)
Dec  1 09:18:44 compute-0 nova_compute[189491]: 2025-12-01 09:18:44.182 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:47 compute-0 nova_compute[189491]: 2025-12-01 09:18:47.170 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:48 compute-0 podman[240862]: 2025-12-01 09:18:48.771361753 +0000 UTC m=+0.117801210 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:18:48 compute-0 podman[240861]: 2025-12-01 09:18:48.800958864 +0000 UTC m=+0.150028114 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:18:49 compute-0 nova_compute[189491]: 2025-12-01 09:18:49.189 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:52 compute-0 nova_compute[189491]: 2025-12-01 09:18:52.173 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:52 compute-0 podman[240899]: 2025-12-01 09:18:52.715777509 +0000 UTC m=+0.087865582 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:18:52 compute-0 podman[240900]: 2025-12-01 09:18:52.7936543 +0000 UTC m=+0.162238908 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:18:54 compute-0 nova_compute[189491]: 2025-12-01 09:18:54.194 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:57 compute-0 nova_compute[189491]: 2025-12-01 09:18:57.177 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:59 compute-0 nova_compute[189491]: 2025-12-01 09:18:59.200 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:18:59 compute-0 podman[203700]: time="2025-12-01T09:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:18:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:18:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Dec  1 09:19:01 compute-0 openstack_network_exporter[205866]: ERROR   09:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:19:01 compute-0 openstack_network_exporter[205866]: ERROR   09:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:19:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:19:01 compute-0 openstack_network_exporter[205866]: ERROR   09:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:19:01 compute-0 openstack_network_exporter[205866]: ERROR   09:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:19:01 compute-0 openstack_network_exporter[205866]: ERROR   09:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:19:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:19:02 compute-0 nova_compute[189491]: 2025-12-01 09:19:02.181 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:04 compute-0 nova_compute[189491]: 2025-12-01 09:19:04.206 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:04 compute-0 podman[240943]: 2025-12-01 09:19:04.751342442 +0000 UTC m=+0.114910311 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:19:04 compute-0 podman[240944]: 2025-12-01 09:19:04.794807516 +0000 UTC m=+0.149975373 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0)
Dec  1 09:19:07 compute-0 nova_compute[189491]: 2025-12-01 09:19:07.184 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:09 compute-0 nova_compute[189491]: 2025-12-01 09:19:09.211 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:10 compute-0 podman[240981]: 2025-12-01 09:19:10.759395746 +0000 UTC m=+0.115830013 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:19:12 compute-0 nova_compute[189491]: 2025-12-01 09:19:12.187 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:14 compute-0 nova_compute[189491]: 2025-12-01 09:19:14.215 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:15 compute-0 podman[241001]: 2025-12-01 09:19:15.357559265 +0000 UTC m=+0.083750033 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.expose-services=, version=9.4, container_name=kepler)
Dec  1 09:19:15 compute-0 podman[241000]: 2025-12-01 09:19:15.375851184 +0000 UTC m=+0.095097565 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:19:17 compute-0 nova_compute[189491]: 2025-12-01 09:19:17.190 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:17 compute-0 nova_compute[189491]: 2025-12-01 09:19:17.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:17 compute-0 nova_compute[189491]: 2025-12-01 09:19:17.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:19:18 compute-0 nova_compute[189491]: 2025-12-01 09:19:18.220 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:19:18 compute-0 nova_compute[189491]: 2025-12-01 09:19:18.221 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:19:18 compute-0 nova_compute[189491]: 2025-12-01 09:19:18.222 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.219 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.457 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.492 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.493 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.494 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:19 compute-0 podman[241046]: 2025-12-01 09:19:19.701928899 +0000 UTC m=+0.076499199 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:19 compute-0 podman[241045]: 2025-12-01 09:19:19.717639786 +0000 UTC m=+0.092977544 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.739 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.739 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.739 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.739 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.813 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.924 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:19:19 compute-0 nova_compute[189491]: 2025-12-01 09:19:19.925 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.013 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.014 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.097 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.098 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.154 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.160 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.222 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.224 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.304 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.305 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.401 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.403 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.501 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.915 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.916 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5065MB free_disk=72.36612701416016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.916 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:19:20 compute-0 nova_compute[189491]: 2025-12-01 09:19:20.917 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:19:21 compute-0 nova_compute[189491]: 2025-12-01 09:19:21.001 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:19:21 compute-0 nova_compute[189491]: 2025-12-01 09:19:21.002 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:19:21 compute-0 nova_compute[189491]: 2025-12-01 09:19:21.002 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:19:21 compute-0 nova_compute[189491]: 2025-12-01 09:19:21.002 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:19:21 compute-0 nova_compute[189491]: 2025-12-01 09:19:21.058 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:19:21 compute-0 nova_compute[189491]: 2025-12-01 09:19:21.074 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:19:21 compute-0 nova_compute[189491]: 2025-12-01 09:19:21.076 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:19:21 compute-0 nova_compute[189491]: 2025-12-01 09:19:21.076 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:19:22 compute-0 nova_compute[189491]: 2025-12-01 09:19:22.075 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:22 compute-0 nova_compute[189491]: 2025-12-01 09:19:22.194 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:22 compute-0 nova_compute[189491]: 2025-12-01 09:19:22.247 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:22 compute-0 nova_compute[189491]: 2025-12-01 09:19:22.248 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:22 compute-0 nova_compute[189491]: 2025-12-01 09:19:22.248 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:22 compute-0 nova_compute[189491]: 2025-12-01 09:19:22.249 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:19:23 compute-0 nova_compute[189491]: 2025-12-01 09:19:23.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:19:23 compute-0 podman[241106]: 2025-12-01 09:19:23.742731082 +0000 UTC m=+0.114682435 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:19:23 compute-0 podman[241107]: 2025-12-01 09:19:23.754167987 +0000 UTC m=+0.112153035 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  1 09:19:24 compute-0 nova_compute[189491]: 2025-12-01 09:19:24.223 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:19:26.503 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:19:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:19:26.504 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:19:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:19:26.505 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:19:27 compute-0 nova_compute[189491]: 2025-12-01 09:19:27.197 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:29 compute-0 nova_compute[189491]: 2025-12-01 09:19:29.228 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:29 compute-0 podman[203700]: time="2025-12-01T09:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:19:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:19:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Dec  1 09:19:31 compute-0 openstack_network_exporter[205866]: ERROR   09:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:19:31 compute-0 openstack_network_exporter[205866]: ERROR   09:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:19:31 compute-0 openstack_network_exporter[205866]: ERROR   09:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:19:31 compute-0 openstack_network_exporter[205866]: ERROR   09:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:19:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:19:31 compute-0 openstack_network_exporter[205866]: ERROR   09:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:19:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:19:32 compute-0 nova_compute[189491]: 2025-12-01 09:19:32.201 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:34 compute-0 nova_compute[189491]: 2025-12-01 09:19:34.233 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:35 compute-0 podman[241161]: 2025-12-01 09:19:35.756346299 +0000 UTC m=+0.114609334 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:19:35 compute-0 podman[241162]: 2025-12-01 09:19:35.800610892 +0000 UTC m=+0.153163239 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, 
container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:19:37 compute-0 nova_compute[189491]: 2025-12-01 09:19:37.203 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:39 compute-0 nova_compute[189491]: 2025-12-01 09:19:39.237 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:41 compute-0 podman[241205]: 2025-12-01 09:19:41.706150134 +0000 UTC m=+0.066810396 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 09:19:42 compute-0 nova_compute[189491]: 2025-12-01 09:19:42.205 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:44 compute-0 nova_compute[189491]: 2025-12-01 09:19:44.244 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:45 compute-0 podman[241225]: 2025-12-01 09:19:45.73706784 +0000 UTC m=+0.105020434 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:19:45 compute-0 podman[241226]: 2025-12-01 09:19:45.771748912 +0000 UTC m=+0.123002094 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, release-0.7.12=, distribution-scope=public, version=9.4, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 09:19:47 compute-0 nova_compute[189491]: 2025-12-01 09:19:47.209 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:49 compute-0 nova_compute[189491]: 2025-12-01 09:19:49.248 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:50 compute-0 podman[241270]: 2025-12-01 09:19:50.75455281 +0000 UTC m=+0.118237010 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:19:50 compute-0 podman[241269]: 2025-12-01 09:19:50.770624387 +0000 UTC m=+0.138205781 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 09:19:52 compute-0 nova_compute[189491]: 2025-12-01 09:19:52.215 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:54 compute-0 nova_compute[189491]: 2025-12-01 09:19:54.252 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:54 compute-0 podman[241308]: 2025-12-01 09:19:54.750414294 +0000 UTC m=+0.117224397 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:19:54 compute-0 podman[241309]: 2025-12-01 09:19:54.759647516 +0000 UTC m=+0.124845739 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec  1 09:19:57 compute-0 nova_compute[189491]: 2025-12-01 09:19:57.218 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:57 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 09:19:59 compute-0 nova_compute[189491]: 2025-12-01 09:19:59.257 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:19:59 compute-0 podman[203700]: time="2025-12-01T09:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:19:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:19:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec  1 09:20:01 compute-0 openstack_network_exporter[205866]: ERROR   09:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:20:01 compute-0 openstack_network_exporter[205866]: ERROR   09:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:20:01 compute-0 openstack_network_exporter[205866]: ERROR   09:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:20:01 compute-0 openstack_network_exporter[205866]: ERROR   09:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:20:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:20:01 compute-0 openstack_network_exporter[205866]: ERROR   09:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:20:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:20:02 compute-0 nova_compute[189491]: 2025-12-01 09:20:02.221 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:04 compute-0 nova_compute[189491]: 2025-12-01 09:20:04.262 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:06 compute-0 podman[241352]: 2025-12-01 09:20:06.757152476 +0000 UTC m=+0.108676402 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:20:06 compute-0 podman[241353]: 2025-12-01 09:20:06.771438969 +0000 UTC m=+0.126209892 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 09:20:07 compute-0 nova_compute[189491]: 2025-12-01 09:20:07.224 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:09 compute-0 nova_compute[189491]: 2025-12-01 09:20:09.268 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:12 compute-0 nova_compute[189491]: 2025-12-01 09:20:12.229 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:12 compute-0 podman[241393]: 2025-12-01 09:20:12.724195974 +0000 UTC m=+0.093623490 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec  1 09:20:14 compute-0 nova_compute[189491]: 2025-12-01 09:20:14.275 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:16 compute-0 podman[241414]: 2025-12-01 09:20:16.727993778 +0000 UTC m=+0.092391420 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4)
Dec  1 09:20:16 compute-0 podman[241413]: 2025-12-01 09:20:16.757285372 +0000 UTC m=+0.120513576 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:20:17 compute-0 nova_compute[189491]: 2025-12-01 09:20:17.281 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:19 compute-0 nova_compute[189491]: 2025-12-01 09:20:19.281 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:19 compute-0 nova_compute[189491]: 2025-12-01 09:20:19.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:19 compute-0 nova_compute[189491]: 2025-12-01 09:20:19.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:20:19 compute-0 nova_compute[189491]: 2025-12-01 09:20:19.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.780 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.782 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.792 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.798 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'name': 'vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.798 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.798 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.799 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:20:19.799303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.899 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.900 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.900 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.986 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.986 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.987 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:19.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:20:19.990169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.028 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.029 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.029 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.059 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.060 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.060 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.063 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.063 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:20:20.063340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.064 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.065 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.065 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 469977634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.066 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 95101905 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.066 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 74341595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.068 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.069 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:20:20.068871) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.070 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.070 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.071 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.071 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.072 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.074 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:20:20.074699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.076 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.077 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.078 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.079 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.080 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.081 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.083 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:20:20.084261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.117 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.146 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.147 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.149 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:20:20.148827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.150 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.151 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.151 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 1274991984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.152 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 13179146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.152 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.154 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.156 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:20:20.155404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.156 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.157 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.157 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.158 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.158 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.160 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:20:20.162013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.167 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.171 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.172 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.173 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.174 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:20:20.174783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.175 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.176 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.178 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.178 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:20:20.178092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.179 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:20:20.182321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.185 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:20:20.185404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.186 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.190 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.190 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:20:20.189935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.191 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.192 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.193 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.193 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes volume: 4690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:20:20.192933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.195 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.196 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.196 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.197 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes.delta volume: 4690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:20:20.196034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.198 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.199 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.200 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:20:20.200042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.201 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:20:20.203078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.203 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.204 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes.delta volume: 4759 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.205 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.206 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.206 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:20:20.206078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.206 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.207 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.208 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 30430000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.209 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/cpu volume: 74650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:20:20.208674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.210 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.211 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.211 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.212 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:20:20.211245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.212 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.212 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.213 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.214 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.214 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.214 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.214 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.215 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.215 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.216 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:20:20.215137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.217 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.217 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.218 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.218 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.218 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.219 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:20:20.217798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.219 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.220 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.221 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.221 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.222 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.222 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:20:20.221711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.223 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:20:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:20:20 compute-0 nova_compute[189491]: 2025-12-01 09:20:20.345 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:20:20 compute-0 nova_compute[189491]: 2025-12-01 09:20:20.346 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:20:20 compute-0 nova_compute[189491]: 2025-12-01 09:20:20.346 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:20:20 compute-0 nova_compute[189491]: 2025-12-01 09:20:20.347 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:20:21 compute-0 podman[241457]: 2025-12-01 09:20:21.714095936 +0000 UTC m=+0.078812194 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git)
Dec  1 09:20:21 compute-0 podman[241458]: 2025-12-01 09:20:21.735839279 +0000 UTC m=+0.092477794 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.285 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.601 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.622 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.623 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.624 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.624 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.625 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.625 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.644 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.645 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.645 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.646 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.721 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.779 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.780 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.848 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.850 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.907 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:20:22 compute-0 nova_compute[189491]: 2025-12-01 09:20:22.910 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.001 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.008 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.093 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.095 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.158 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.161 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.228 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.230 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.302 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.791 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.792 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5049MB free_disk=72.36612701416016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.793 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.794 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.907 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.908 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.909 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.910 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:20:23 compute-0 nova_compute[189491]: 2025-12-01 09:20:23.994 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:20:24 compute-0 nova_compute[189491]: 2025-12-01 09:20:24.027 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:20:24 compute-0 nova_compute[189491]: 2025-12-01 09:20:24.031 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:20:24 compute-0 nova_compute[189491]: 2025-12-01 09:20:24.032 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:20:24 compute-0 nova_compute[189491]: 2025-12-01 09:20:24.122 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:24 compute-0 nova_compute[189491]: 2025-12-01 09:20:24.123 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:24 compute-0 nova_compute[189491]: 2025-12-01 09:20:24.124 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:24 compute-0 nova_compute[189491]: 2025-12-01 09:20:24.124 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:20:24 compute-0 nova_compute[189491]: 2025-12-01 09:20:24.284 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:25 compute-0 nova_compute[189491]: 2025-12-01 09:20:25.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:20:25 compute-0 podman[241523]: 2025-12-01 09:20:25.78078078 +0000 UTC m=+0.132087487 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:20:25 compute-0 podman[241524]: 2025-12-01 09:20:25.822663703 +0000 UTC m=+0.167878433 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  1 09:20:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:20:26.504 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:20:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:20:26.505 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:20:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:20:26.506 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:20:27 compute-0 nova_compute[189491]: 2025-12-01 09:20:27.286 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:29 compute-0 nova_compute[189491]: 2025-12-01 09:20:29.289 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:29 compute-0 podman[203700]: time="2025-12-01T09:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:20:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:20:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Dec  1 09:20:31 compute-0 openstack_network_exporter[205866]: ERROR   09:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:20:31 compute-0 openstack_network_exporter[205866]: ERROR   09:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:20:31 compute-0 openstack_network_exporter[205866]: ERROR   09:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:20:31 compute-0 openstack_network_exporter[205866]: ERROR   09:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:20:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:20:31 compute-0 openstack_network_exporter[205866]: ERROR   09:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:20:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:20:32 compute-0 nova_compute[189491]: 2025-12-01 09:20:32.288 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:34 compute-0 nova_compute[189491]: 2025-12-01 09:20:34.295 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:37 compute-0 nova_compute[189491]: 2025-12-01 09:20:37.290 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:37 compute-0 podman[241572]: 2025-12-01 09:20:37.73734142 +0000 UTC m=+0.096661397 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:20:37 compute-0 podman[241573]: 2025-12-01 09:20:37.749343508 +0000 UTC m=+0.105959430 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 09:20:39 compute-0 nova_compute[189491]: 2025-12-01 09:20:39.302 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:42 compute-0 nova_compute[189491]: 2025-12-01 09:20:42.297 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:43 compute-0 podman[241615]: 2025-12-01 09:20:43.721057267 +0000 UTC m=+0.086972885 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 09:20:44 compute-0 nova_compute[189491]: 2025-12-01 09:20:44.306 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:47 compute-0 nova_compute[189491]: 2025-12-01 09:20:47.301 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:47 compute-0 podman[241634]: 2025-12-01 09:20:47.752142502 +0000 UTC m=+0.110317715 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:20:47 compute-0 podman[241635]: 2025-12-01 09:20:47.753225858 +0000 UTC m=+0.113497411 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vendor=Red Hat, Inc., name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, release-0.7.12=, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  1 09:20:49 compute-0 nova_compute[189491]: 2025-12-01 09:20:49.313 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:52 compute-0 nova_compute[189491]: 2025-12-01 09:20:52.303 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:52 compute-0 podman[241677]: 2025-12-01 09:20:52.701914092 +0000 UTC m=+0.079457796 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Dec  1 09:20:52 compute-0 podman[241678]: 2025-12-01 09:20:52.713808496 +0000 UTC m=+0.085531310 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:20:54 compute-0 nova_compute[189491]: 2025-12-01 09:20:54.318 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:56 compute-0 podman[241715]: 2025-12-01 09:20:56.742262109 +0000 UTC m=+0.115027468 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:20:56 compute-0 podman[241714]: 2025-12-01 09:20:56.766343126 +0000 UTC m=+0.131704317 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 09:20:57 compute-0 nova_compute[189491]: 2025-12-01 09:20:57.306 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:59 compute-0 nova_compute[189491]: 2025-12-01 09:20:59.326 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:20:59 compute-0 podman[203700]: time="2025-12-01T09:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:20:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:20:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 09:21:01 compute-0 openstack_network_exporter[205866]: ERROR   09:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:21:01 compute-0 openstack_network_exporter[205866]: ERROR   09:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:21:01 compute-0 openstack_network_exporter[205866]: ERROR   09:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:21:01 compute-0 openstack_network_exporter[205866]: ERROR   09:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:21:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:21:01 compute-0 openstack_network_exporter[205866]: ERROR   09:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:21:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:21:02 compute-0 nova_compute[189491]: 2025-12-01 09:21:02.307 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:04 compute-0 nova_compute[189491]: 2025-12-01 09:21:04.333 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:07 compute-0 nova_compute[189491]: 2025-12-01 09:21:07.311 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:08 compute-0 podman[241757]: 2025-12-01 09:21:08.704563147 +0000 UTC m=+0.081850632 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:21:08 compute-0 podman[241758]: 2025-12-01 09:21:08.727026765 +0000 UTC m=+0.094715500 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  1 09:21:09 compute-0 nova_compute[189491]: 2025-12-01 09:21:09.338 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:12 compute-0 nova_compute[189491]: 2025-12-01 09:21:12.315 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:14 compute-0 nova_compute[189491]: 2025-12-01 09:21:14.342 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:14 compute-0 podman[241799]: 2025-12-01 09:21:14.729931562 +0000 UTC m=+0.092471907 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:21:17 compute-0 nova_compute[189491]: 2025-12-01 09:21:17.317 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:18 compute-0 podman[241818]: 2025-12-01 09:21:18.735400022 +0000 UTC m=+0.098494610 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:21:18 compute-0 podman[241819]: 2025-12-01 09:21:18.740498845 +0000 UTC m=+0.104708600 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, maintainer=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec  1 09:21:19 compute-0 nova_compute[189491]: 2025-12-01 09:21:19.347 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:19 compute-0 nova_compute[189491]: 2025-12-01 09:21:19.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:19 compute-0 nova_compute[189491]: 2025-12-01 09:21:19.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:21:20 compute-0 nova_compute[189491]: 2025-12-01 09:21:20.284 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:21:20 compute-0 nova_compute[189491]: 2025-12-01 09:21:20.285 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:21:20 compute-0 nova_compute[189491]: 2025-12-01 09:21:20.285 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:21:21 compute-0 nova_compute[189491]: 2025-12-01 09:21:21.826 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:21:21 compute-0 nova_compute[189491]: 2025-12-01 09:21:21.889 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:21:21 compute-0 nova_compute[189491]: 2025-12-01 09:21:21.890 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:21:21 compute-0 nova_compute[189491]: 2025-12-01 09:21:21.892 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:21 compute-0 nova_compute[189491]: 2025-12-01 09:21:21.892 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:21 compute-0 nova_compute[189491]: 2025-12-01 09:21:21.893 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:22 compute-0 nova_compute[189491]: 2025-12-01 09:21:22.323 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:22 compute-0 nova_compute[189491]: 2025-12-01 09:21:22.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:22 compute-0 nova_compute[189491]: 2025-12-01 09:21:22.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:22 compute-0 nova_compute[189491]: 2025-12-01 09:21:22.812 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:21:22 compute-0 nova_compute[189491]: 2025-12-01 09:21:22.813 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:21:22 compute-0 nova_compute[189491]: 2025-12-01 09:21:22.813 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:21:22 compute-0 nova_compute[189491]: 2025-12-01 09:21:22.814 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.068 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.135 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.137 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.204 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.205 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.295 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.297 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.358 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.370 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.433 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.435 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.516 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.517 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.622 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.624 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:21:23 compute-0 nova_compute[189491]: 2025-12-01 09:21:23.686 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:21:23 compute-0 podman[241881]: 2025-12-01 09:21:23.691026613 +0000 UTC m=+0.070739797 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, 
url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:21:23 compute-0 podman[241884]: 2025-12-01 09:21:23.693875671 +0000 UTC m=+0.068261767 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.078 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.079 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=72.36515045166016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.080 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.080 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.268 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.269 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.269 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.270 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.340 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.351 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.382 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.384 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:21:24 compute-0 nova_compute[189491]: 2025-12-01 09:21:24.384 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:21:26 compute-0 nova_compute[189491]: 2025-12-01 09:21:26.384 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:26 compute-0 nova_compute[189491]: 2025-12-01 09:21:26.385 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:26 compute-0 nova_compute[189491]: 2025-12-01 09:21:26.385 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:26 compute-0 nova_compute[189491]: 2025-12-01 09:21:26.386 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:21:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:21:26.506 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:21:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:21:26.506 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:21:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:21:26.507 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:21:26 compute-0 nova_compute[189491]: 2025-12-01 09:21:26.711 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:21:27 compute-0 nova_compute[189491]: 2025-12-01 09:21:27.327 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:27 compute-0 podman[241923]: 2025-12-01 09:21:27.712677472 +0000 UTC m=+0.088212435 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:21:27 compute-0 podman[241924]: 2025-12-01 09:21:27.737344024 +0000 UTC m=+0.113389248 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:21:29 compute-0 nova_compute[189491]: 2025-12-01 09:21:29.355 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:29 compute-0 podman[203700]: time="2025-12-01T09:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:21:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:21:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Dec  1 09:21:31 compute-0 openstack_network_exporter[205866]: ERROR   09:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:21:31 compute-0 openstack_network_exporter[205866]: ERROR   09:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:21:31 compute-0 openstack_network_exporter[205866]: ERROR   09:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:21:31 compute-0 openstack_network_exporter[205866]: ERROR   09:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:21:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:21:31 compute-0 openstack_network_exporter[205866]: ERROR   09:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:21:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:21:32 compute-0 nova_compute[189491]: 2025-12-01 09:21:32.333 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:34 compute-0 nova_compute[189491]: 2025-12-01 09:21:34.359 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:37 compute-0 nova_compute[189491]: 2025-12-01 09:21:37.336 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:39 compute-0 nova_compute[189491]: 2025-12-01 09:21:39.364 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:39 compute-0 podman[241968]: 2025-12-01 09:21:39.704703143 +0000 UTC m=+0.075077120 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:21:39 compute-0 podman[241969]: 2025-12-01 09:21:39.729439236 +0000 UTC m=+0.095744546 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 09:21:42 compute-0 nova_compute[189491]: 2025-12-01 09:21:42.344 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:44 compute-0 nova_compute[189491]: 2025-12-01 09:21:44.369 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:45 compute-0 podman[242011]: 2025-12-01 09:21:45.709556476 +0000 UTC m=+0.081856723 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec  1 09:21:47 compute-0 nova_compute[189491]: 2025-12-01 09:21:47.350 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:49 compute-0 nova_compute[189491]: 2025-12-01 09:21:49.377 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:49 compute-0 podman[242032]: 2025-12-01 09:21:49.768528279 +0000 UTC m=+0.121236566 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:21:49 compute-0 podman[242033]: 2025-12-01 09:21:49.808279672 +0000 UTC m=+0.152733861 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:21:52 compute-0 nova_compute[189491]: 2025-12-01 09:21:52.353 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:54 compute-0 nova_compute[189491]: 2025-12-01 09:21:54.382 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:54 compute-0 podman[242074]: 2025-12-01 09:21:54.753139654 +0000 UTC m=+0.118357118 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Dec  1 09:21:54 compute-0 podman[242075]: 2025-12-01 09:21:54.779711741 +0000 UTC m=+0.126126904 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 09:21:57 compute-0 nova_compute[189491]: 2025-12-01 09:21:57.355 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:58 compute-0 podman[242111]: 2025-12-01 09:21:58.766484254 +0000 UTC m=+0.125825956 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 09:21:58 compute-0 podman[242112]: 2025-12-01 09:21:58.832080326 +0000 UTC m=+0.171328666 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  1 09:21:59 compute-0 nova_compute[189491]: 2025-12-01 09:21:59.386 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:21:59 compute-0 podman[203700]: time="2025-12-01T09:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:21:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:21:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Dec  1 09:22:01 compute-0 openstack_network_exporter[205866]: ERROR   09:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:22:01 compute-0 openstack_network_exporter[205866]: ERROR   09:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:22:01 compute-0 openstack_network_exporter[205866]: ERROR   09:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:22:01 compute-0 openstack_network_exporter[205866]: ERROR   09:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:22:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:22:01 compute-0 openstack_network_exporter[205866]: ERROR   09:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:22:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:22:02 compute-0 nova_compute[189491]: 2025-12-01 09:22:02.359 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:04 compute-0 nova_compute[189491]: 2025-12-01 09:22:04.389 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:07 compute-0 nova_compute[189491]: 2025-12-01 09:22:07.361 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:09 compute-0 nova_compute[189491]: 2025-12-01 09:22:09.393 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:10 compute-0 nova_compute[189491]: 2025-12-01 09:22:10.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:10 compute-0 nova_compute[189491]: 2025-12-01 09:22:10.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:22:10 compute-0 podman[242156]: 2025-12-01 09:22:10.752414839 +0000 UTC m=+0.108265636 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:22:10 compute-0 podman[242155]: 2025-12-01 09:22:10.754017547 +0000 UTC m=+0.129302349 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:22:12 compute-0 nova_compute[189491]: 2025-12-01 09:22:12.368 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:14 compute-0 nova_compute[189491]: 2025-12-01 09:22:14.398 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:16 compute-0 podman[242198]: 2025-12-01 09:22:16.718187135 +0000 UTC m=+0.085349266 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  1 09:22:17 compute-0 nova_compute[189491]: 2025-12-01 09:22:17.372 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:19 compute-0 nova_compute[189491]: 2025-12-01 09:22:19.402 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:19 compute-0 nova_compute[189491]: 2025-12-01 09:22:19.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:19 compute-0 nova_compute[189491]: 2025-12-01 09:22:19.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:22:19 compute-0 nova_compute[189491]: 2025-12-01 09:22:19.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.781 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.781 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.788 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.791 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'name': 'vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.792 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.792 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.792 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.792 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.793 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:22:19.792500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.859 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.860 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.860 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.946 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.947 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.947 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.949 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:22:19.949144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.976 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.976 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:19.977 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.030 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.031 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.031 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.032 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.032 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.033 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.033 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.033 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.033 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.034 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 469977634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.034 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 95101905 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.034 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 74341595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:22:20.033076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.036 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.037 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.037 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:22:20.037288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.038 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.038 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.039 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.039 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.040 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.042 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:22:20.042318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.043 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.043 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.044 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.044 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.045 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.046 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.047 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:22:20.047840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.077 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.109 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.110 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.111 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.111 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:22:20.111441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.112 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.113 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.113 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 1287067524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.113 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 13179146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.114 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.115 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.115 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.116 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:22:20.115920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.116 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.117 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.117 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.117 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.118 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.119 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:22:20.119784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.123 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.126 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.127 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.128 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:22:20.128587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.130 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:22:20.130380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.130 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.132 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:22:20.132856) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.134 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:22:20.134689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.135 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.136 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.136 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:22:20.136906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.137 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.137 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.138 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.139 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:22:20.139405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.139 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.140 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes volume: 4760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.141 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.142 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:22:20.141744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.142 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.143 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.143 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.144 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.144 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:22:20.144432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.145 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/memory.usage volume: 49.09765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.145 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.146 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:22:20.146504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.147 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.148 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.149 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:22:20.148767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.149 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.150 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.151 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.151 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 32120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:22:20.151002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.151 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/cpu volume: 194240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.152 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.153 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.153 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:22:20.153343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.154 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.154 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.154 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.155 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.155 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.157 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:22:20.156949) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.157 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.159 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:22:20.159342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.160 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.160 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.160 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.161 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.161 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.162 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.163 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.163 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:22:20.162951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.163 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:22:20.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:22:20 compute-0 nova_compute[189491]: 2025-12-01 09:22:20.334 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:22:20 compute-0 nova_compute[189491]: 2025-12-01 09:22:20.335 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:22:20 compute-0 nova_compute[189491]: 2025-12-01 09:22:20.335 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:22:20 compute-0 nova_compute[189491]: 2025-12-01 09:22:20.335 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:22:20 compute-0 podman[242219]: 2025-12-01 09:22:20.763755687 +0000 UTC m=+0.122889796 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:22:20 compute-0 podman[242220]: 2025-12-01 09:22:20.776899082 +0000 UTC m=+0.125143890 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, version=9.4)
Dec  1 09:22:22 compute-0 nova_compute[189491]: 2025-12-01 09:22:22.376 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.698 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.724 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.726 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.727 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.728 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.729 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.730 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.757 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.758 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.759 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.760 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.844 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.928 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.929 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.985 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:22:23 compute-0 nova_compute[189491]: 2025-12-01 09:22:23.987 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.087 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.089 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.167 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.175 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.236 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.238 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.302 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.305 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.391 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.394 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.410 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.453 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.842 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.843 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5052MB free_disk=72.36520385742188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.843 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.843 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.929 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.929 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.929 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.929 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.947 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.968 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.969 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:22:24 compute-0 nova_compute[189491]: 2025-12-01 09:22:24.985 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.012 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.062 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.078 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.080 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.080 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.080 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.081 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.095 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:22:25 compute-0 nova_compute[189491]: 2025-12-01 09:22:25.096 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:25 compute-0 podman[242283]: 2025-12-01 09:22:25.747441429 +0000 UTC m=+0.095050608 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  1 09:22:25 compute-0 podman[242282]: 2025-12-01 09:22:25.784809205 +0000 UTC m=+0.138366577 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, 
build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9)
Dec  1 09:22:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:22:26.506 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:22:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:22:26.506 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:22:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:22:26.507 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:22:27 compute-0 nova_compute[189491]: 2025-12-01 09:22:27.382 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:28 compute-0 nova_compute[189491]: 2025-12-01 09:22:28.097 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:28 compute-0 nova_compute[189491]: 2025-12-01 09:22:28.097 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:28 compute-0 nova_compute[189491]: 2025-12-01 09:22:28.098 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:28 compute-0 nova_compute[189491]: 2025-12-01 09:22:28.098 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:22:28 compute-0 nova_compute[189491]: 2025-12-01 09:22:28.098 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:22:29 compute-0 nova_compute[189491]: 2025-12-01 09:22:29.418 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:29 compute-0 podman[203700]: time="2025-12-01T09:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:22:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:22:29 compute-0 podman[242321]: 2025-12-01 09:22:29.743240339 +0000 UTC m=+0.119051374 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:22:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec  1 09:22:29 compute-0 podman[242322]: 2025-12-01 09:22:29.785537703 +0000 UTC m=+0.154767050 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  1 09:22:31 compute-0 openstack_network_exporter[205866]: ERROR   09:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:22:31 compute-0 openstack_network_exporter[205866]: ERROR   09:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:22:31 compute-0 openstack_network_exporter[205866]: ERROR   09:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:22:31 compute-0 openstack_network_exporter[205866]: ERROR   09:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:22:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:22:31 compute-0 openstack_network_exporter[205866]: ERROR   09:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:22:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:22:32 compute-0 nova_compute[189491]: 2025-12-01 09:22:32.385 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:34 compute-0 nova_compute[189491]: 2025-12-01 09:22:34.422 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:37 compute-0 nova_compute[189491]: 2025-12-01 09:22:37.387 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:39 compute-0 nova_compute[189491]: 2025-12-01 09:22:39.426 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:41 compute-0 podman[242365]: 2025-12-01 09:22:41.719585849 +0000 UTC m=+0.081134554 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:22:41 compute-0 podman[242366]: 2025-12-01 09:22:41.747625398 +0000 UTC m=+0.110868834 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 09:22:42 compute-0 nova_compute[189491]: 2025-12-01 09:22:42.389 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:44 compute-0 nova_compute[189491]: 2025-12-01 09:22:44.432 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:47 compute-0 nova_compute[189491]: 2025-12-01 09:22:47.392 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:47 compute-0 podman[242409]: 2025-12-01 09:22:47.714348693 +0000 UTC m=+0.088031450 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:22:49 compute-0 nova_compute[189491]: 2025-12-01 09:22:49.438 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:51 compute-0 podman[242429]: 2025-12-01 09:22:51.699114095 +0000 UTC m=+0.073211832 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., name=ubi9, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30)
Dec  1 09:22:51 compute-0 podman[242428]: 2025-12-01 09:22:51.711106006 +0000 UTC m=+0.088581154 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:22:52 compute-0 nova_compute[189491]: 2025-12-01 09:22:52.395 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:54 compute-0 nova_compute[189491]: 2025-12-01 09:22:54.444 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:56 compute-0 podman[242473]: 2025-12-01 09:22:56.705011604 +0000 UTC m=+0.072013253 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container)
Dec  1 09:22:56 compute-0 podman[242474]: 2025-12-01 09:22:56.797305658 +0000 UTC m=+0.143844162 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Dec  1 09:22:57 compute-0 nova_compute[189491]: 2025-12-01 09:22:57.398 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:59 compute-0 nova_compute[189491]: 2025-12-01 09:22:59.448 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:22:59 compute-0 podman[203700]: time="2025-12-01T09:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:22:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:22:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 09:23:00 compute-0 podman[242511]: 2025-12-01 09:23:00.716865612 +0000 UTC m=+0.078663184 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 09:23:00 compute-0 podman[242512]: 2025-12-01 09:23:00.765843077 +0000 UTC m=+0.117394852 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  1 09:23:01 compute-0 openstack_network_exporter[205866]: ERROR   09:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:23:01 compute-0 openstack_network_exporter[205866]: ERROR   09:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:23:01 compute-0 openstack_network_exporter[205866]: ERROR   09:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:23:01 compute-0 openstack_network_exporter[205866]: ERROR   09:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:23:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:23:01 compute-0 openstack_network_exporter[205866]: ERROR   09:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:23:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:23:02 compute-0 nova_compute[189491]: 2025-12-01 09:23:02.400 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:04 compute-0 nova_compute[189491]: 2025-12-01 09:23:04.453 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:07 compute-0 nova_compute[189491]: 2025-12-01 09:23:07.403 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:09 compute-0 nova_compute[189491]: 2025-12-01 09:23:09.457 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:12 compute-0 nova_compute[189491]: 2025-12-01 09:23:12.405 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:12 compute-0 podman[242553]: 2025-12-01 09:23:12.712336444 +0000 UTC m=+0.085938090 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:23:12 compute-0 podman[242554]: 2025-12-01 09:23:12.721392944 +0000 UTC m=+0.092784056 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  1 09:23:14 compute-0 nova_compute[189491]: 2025-12-01 09:23:14.461 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:17 compute-0 nova_compute[189491]: 2025-12-01 09:23:17.405 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:18 compute-0 podman[242595]: 2025-12-01 09:23:18.688218131 +0000 UTC m=+0.066393757 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 09:23:19 compute-0 nova_compute[189491]: 2025-12-01 09:23:19.466 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:20 compute-0 nova_compute[189491]: 2025-12-01 09:23:20.718 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:20 compute-0 nova_compute[189491]: 2025-12-01 09:23:20.718 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:23:20 compute-0 nova_compute[189491]: 2025-12-01 09:23:20.974 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:23:20 compute-0 nova_compute[189491]: 2025-12-01 09:23:20.975 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:23:20 compute-0 nova_compute[189491]: 2025-12-01 09:23:20.976 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:23:22 compute-0 nova_compute[189491]: 2025-12-01 09:23:22.408 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:22 compute-0 nova_compute[189491]: 2025-12-01 09:23:22.430 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:23:22 compute-0 nova_compute[189491]: 2025-12-01 09:23:22.450 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:23:22 compute-0 nova_compute[189491]: 2025-12-01 09:23:22.452 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:23:22 compute-0 nova_compute[189491]: 2025-12-01 09:23:22.453 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:22 compute-0 podman[242616]: 2025-12-01 09:23:22.759255361 +0000 UTC m=+0.113796954 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:23:22 compute-0 podman[242617]: 2025-12-01 09:23:22.790135628 +0000 UTC m=+0.137191660 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9)
Dec  1 09:23:24 compute-0 nova_compute[189491]: 2025-12-01 09:23:24.472 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:24 compute-0 nova_compute[189491]: 2025-12-01 09:23:24.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:24 compute-0 nova_compute[189491]: 2025-12-01 09:23:24.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:25 compute-0 nova_compute[189491]: 2025-12-01 09:23:25.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:25 compute-0 nova_compute[189491]: 2025-12-01 09:23:25.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:25 compute-0 nova_compute[189491]: 2025-12-01 09:23:25.844 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:23:25 compute-0 nova_compute[189491]: 2025-12-01 09:23:25.844 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:23:25 compute-0 nova_compute[189491]: 2025-12-01 09:23:25.845 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:23:25 compute-0 nova_compute[189491]: 2025-12-01 09:23:25.845 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.087 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.190 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.192 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.264 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.266 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.361 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.364 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.457 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.465 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:23:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:23:26.507 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:23:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:23:26.508 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:23:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:23:26.508 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.556 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.558 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.636 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.649 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.727 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.730 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:23:26 compute-0 nova_compute[189491]: 2025-12-01 09:23:26.820 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.258 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.260 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5055MB free_disk=72.36501693725586GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.261 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.261 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.410 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.560 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.565 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.566 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.567 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:23:27 compute-0 podman[242684]: 2025-12-01 09:23:27.718508801 +0000 UTC m=+0.083591654 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.742 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:23:27 compute-0 podman[242683]: 2025-12-01 09:23:27.749606743 +0000 UTC m=+0.110966895 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.758 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.759 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:23:27 compute-0 nova_compute[189491]: 2025-12-01 09:23:27.760 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:23:28 compute-0 nova_compute[189491]: 2025-12-01 09:23:28.761 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:28 compute-0 nova_compute[189491]: 2025-12-01 09:23:28.784 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:28 compute-0 nova_compute[189491]: 2025-12-01 09:23:28.785 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:28 compute-0 nova_compute[189491]: 2025-12-01 09:23:28.786 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:23:28 compute-0 nova_compute[189491]: 2025-12-01 09:23:28.787 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:23:29 compute-0 nova_compute[189491]: 2025-12-01 09:23:29.475 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:29 compute-0 podman[203700]: time="2025-12-01T09:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:23:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:23:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec  1 09:23:31 compute-0 openstack_network_exporter[205866]: ERROR   09:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:23:31 compute-0 openstack_network_exporter[205866]: ERROR   09:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:23:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:23:31 compute-0 openstack_network_exporter[205866]: ERROR   09:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:23:31 compute-0 openstack_network_exporter[205866]: ERROR   09:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:23:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:23:31 compute-0 openstack_network_exporter[205866]: ERROR   09:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:23:31 compute-0 podman[242719]: 2025-12-01 09:23:31.744692126 +0000 UTC m=+0.104867809 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125)
Dec  1 09:23:31 compute-0 podman[242720]: 2025-12-01 09:23:31.793571327 +0000 UTC m=+0.159081919 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:23:32 compute-0 nova_compute[189491]: 2025-12-01 09:23:32.413 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:34 compute-0 nova_compute[189491]: 2025-12-01 09:23:34.480 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:37 compute-0 nova_compute[189491]: 2025-12-01 09:23:37.415 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:39 compute-0 nova_compute[189491]: 2025-12-01 09:23:39.485 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:42 compute-0 nova_compute[189491]: 2025-12-01 09:23:42.419 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:43 compute-0 podman[242765]: 2025-12-01 09:23:43.708910632 +0000 UTC m=+0.084688130 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:23:43 compute-0 podman[242766]: 2025-12-01 09:23:43.711123855 +0000 UTC m=+0.075383025 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 09:23:44 compute-0 nova_compute[189491]: 2025-12-01 09:23:44.488 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:47 compute-0 nova_compute[189491]: 2025-12-01 09:23:47.422 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:49 compute-0 nova_compute[189491]: 2025-12-01 09:23:49.491 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:49 compute-0 podman[242807]: 2025-12-01 09:23:49.68568204 +0000 UTC m=+0.059344226 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 09:23:52 compute-0 nova_compute[189491]: 2025-12-01 09:23:52.425 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:53 compute-0 podman[242827]: 2025-12-01 09:23:53.68769796 +0000 UTC m=+0.064163913 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:23:53 compute-0 podman[242828]: 2025-12-01 09:23:53.723776113 +0000 UTC m=+0.089139977 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0)
Dec  1 09:23:54 compute-0 nova_compute[189491]: 2025-12-01 09:23:54.495 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:57 compute-0 nova_compute[189491]: 2025-12-01 09:23:57.427 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:57 compute-0 nova_compute[189491]: 2025-12-01 09:23:57.504 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:23:57.504 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:23:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:23:57.505 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:23:58 compute-0 podman[242873]: 2025-12-01 09:23:58.706190723 +0000 UTC m=+0.071991003 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:23:58 compute-0 podman[242872]: 2025-12-01 09:23:58.731772482 +0000 UTC m=+0.098805401 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64)
Dec  1 09:23:59 compute-0 nova_compute[189491]: 2025-12-01 09:23:59.499 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:23:59 compute-0 podman[203700]: time="2025-12-01T09:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:23:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:23:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec  1 09:24:01 compute-0 openstack_network_exporter[205866]: ERROR   09:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:24:01 compute-0 openstack_network_exporter[205866]: ERROR   09:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:24:01 compute-0 openstack_network_exporter[205866]: ERROR   09:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:24:01 compute-0 openstack_network_exporter[205866]: ERROR   09:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:24:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:24:01 compute-0 openstack_network_exporter[205866]: ERROR   09:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:24:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:24:02 compute-0 nova_compute[189491]: 2025-12-01 09:24:02.435 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:02 compute-0 podman[242910]: 2025-12-01 09:24:02.69361563 +0000 UTC m=+0.070660631 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Dec  1 09:24:02 compute-0 podman[242911]: 2025-12-01 09:24:02.743825575 +0000 UTC m=+0.111507849 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec  1 09:24:04 compute-0 nova_compute[189491]: 2025-12-01 09:24:04.505 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:06.507 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:24:07 compute-0 nova_compute[189491]: 2025-12-01 09:24:07.440 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.103 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.103 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.313 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.376 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.377 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.388 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.388 189495 INFO nova.compute.claims [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.511 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.716 189495 DEBUG nova.compute.provider_tree [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.732 189495 DEBUG nova.scheduler.client.report [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.759 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.760 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.829 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:24:09 compute-0 nova_compute[189491]: 2025-12-01 09:24:09.829 189495 DEBUG nova.network.neutron [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.004 189495 INFO nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.041 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.280 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.283 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.284 189495 INFO nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Creating image(s)#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.285 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.286 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.288 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.317 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.400 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.402 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.403 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.427 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.493 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.495 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5,backing_fmt=raw /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.579 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5,backing_fmt=raw /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk 1073741824" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.581 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.582 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.668 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.670 189495 DEBUG nova.virt.disk.api [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Checking if we can resize image /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.671 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.734 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.736 189495 DEBUG nova.virt.disk.api [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Cannot resize image /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.737 189495 DEBUG nova.objects.instance [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'migration_context' on Instance uuid 350d2bc4-8489-4a5a-991a-99e32671f962 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.797 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.798 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.800 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.828 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.910 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.912 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.913 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:10 compute-0 nova_compute[189491]: 2025-12-01 09:24:10.942 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.005 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.007 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.067 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 1073741824" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.069 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.070 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.142 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.143 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.144 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Ensure instance console log exists: /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.145 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.146 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:11 compute-0 nova_compute[189491]: 2025-12-01 09:24:11.146 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:12 compute-0 nova_compute[189491]: 2025-12-01 09:24:12.442 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:13 compute-0 nova_compute[189491]: 2025-12-01 09:24:13.522 189495 DEBUG nova.network.neutron [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Successfully updated port: a79ae82e-bfbc-4718-a23a-6d99c6057e19 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:24:13 compute-0 nova_compute[189491]: 2025-12-01 09:24:13.541 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:24:13 compute-0 nova_compute[189491]: 2025-12-01 09:24:13.542 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquired lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:24:13 compute-0 nova_compute[189491]: 2025-12-01 09:24:13.542 189495 DEBUG nova.network.neutron [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:24:13 compute-0 nova_compute[189491]: 2025-12-01 09:24:13.634 189495 DEBUG nova.compute.manager [req-f3a18a3e-b548-43bd-a5ac-195cc07d9902 req-0f45f0e3-e28c-40ad-b0ef-e6abf3b615c4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received event network-changed-a79ae82e-bfbc-4718-a23a-6d99c6057e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:24:13 compute-0 nova_compute[189491]: 2025-12-01 09:24:13.634 189495 DEBUG nova.compute.manager [req-f3a18a3e-b548-43bd-a5ac-195cc07d9902 req-0f45f0e3-e28c-40ad-b0ef-e6abf3b615c4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Refreshing instance network info cache due to event network-changed-a79ae82e-bfbc-4718-a23a-6d99c6057e19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:24:13 compute-0 nova_compute[189491]: 2025-12-01 09:24:13.634 189495 DEBUG oslo_concurrency.lockutils [req-f3a18a3e-b548-43bd-a5ac-195cc07d9902 req-0f45f0e3-e28c-40ad-b0ef-e6abf3b615c4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:24:14 compute-0 nova_compute[189491]: 2025-12-01 09:24:14.463 189495 DEBUG nova.network.neutron [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:24:14 compute-0 nova_compute[189491]: 2025-12-01 09:24:14.513 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:14 compute-0 podman[242981]: 2025-12-01 09:24:14.727858611 +0000 UTC m=+0.089926187 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 09:24:14 compute-0 podman[242980]: 2025-12-01 09:24:14.742376312 +0000 UTC m=+0.103132357 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.688 189495 DEBUG nova.network.neutron [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updating instance_info_cache with network_info: [{"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.710 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Releasing lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.711 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Instance network_info: |[{"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.711 189495 DEBUG oslo_concurrency.lockutils [req-f3a18a3e-b548-43bd-a5ac-195cc07d9902 req-0f45f0e3-e28c-40ad-b0ef-e6abf3b615c4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.712 189495 DEBUG nova.network.neutron [req-f3a18a3e-b548-43bd-a5ac-195cc07d9902 req-0f45f0e3-e28c-40ad-b0ef-e6abf3b615c4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Refreshing network info cache for port a79ae82e-bfbc-4718-a23a-6d99c6057e19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.716 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Start _get_guest_xml network_info=[{"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:15:08Z,direct_url=<?>,disk_format='qcow2',id=304c689d-2799-45ae-8166-517d5fd107b2,min_disk=0,min_ram=0,name='cirros',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:15:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '304c689d-2799-45ae-8166-517d5fd107b2'}], 'ephemerals': [{'size': 1, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.726 189495 WARNING nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.740 189495 DEBUG nova.virt.libvirt.host [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.741 189495 DEBUG nova.virt.libvirt.host [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.747 189495 DEBUG nova.virt.libvirt.host [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.748 189495 DEBUG nova.virt.libvirt.host [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.749 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.750 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:15:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='719a52fe-7f4b-48c0-b9dc-6a91d4ec600c',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:15:08Z,direct_url=<?>,disk_format='qcow2',id=304c689d-2799-45ae-8166-517d5fd107b2,min_disk=0,min_ram=0,name='cirros',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:15:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.751 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.751 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.752 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.752 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.753 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.753 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.754 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.755 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.755 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.756 189495 DEBUG nova.virt.hardware [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.762 189495 DEBUG nova.virt.libvirt.vif [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu',id=3,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-l4ia17ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:24:10Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDk4MTk5MDAyMTc1ODI0NDQ0MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  1 09:24:15 compute-0 nova_compute[189491]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDk4MTk5MDAyMTc1ODI0NDQ0MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=350d2bc4-8489-4a5a-991a-99e32671f962,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.763 189495 DEBUG nova.network.os_vif_util [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.764 189495 DEBUG nova.network.os_vif_util [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:68:61,bridge_name='br-int',has_traffic_filtering=True,id=a79ae82e-bfbc-4718-a23a-6d99c6057e19,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa79ae82e-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.765 189495 DEBUG nova.objects.instance [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 350d2bc4-8489-4a5a-991a-99e32671f962 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.779 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <uuid>350d2bc4-8489-4a5a-991a-99e32671f962</uuid>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <name>instance-00000003</name>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <memory>524288</memory>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <nova:name>vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu</nova:name>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:24:15</nova:creationTime>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <nova:flavor name="m1.small">
Dec  1 09:24:15 compute-0 nova_compute[189491]:        <nova:memory>512</nova:memory>
Dec  1 09:24:15 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:24:15 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:24:15 compute-0 nova_compute[189491]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 09:24:15 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:24:15 compute-0 nova_compute[189491]:        <nova:user uuid="962a55152ff34fdda5eae1f8aee3a7a9">admin</nova:user>
Dec  1 09:24:15 compute-0 nova_compute[189491]:        <nova:project uuid="fac95b8a995a4174bfa966a8d9d9aa01">admin</nova:project>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="304c689d-2799-45ae-8166-517d5fd107b2"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:24:15 compute-0 nova_compute[189491]:        <nova:port uuid="a79ae82e-bfbc-4718-a23a-6d99c6057e19">
Dec  1 09:24:15 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="192.168.0.209" ipVersion="4"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <system>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <entry name="serial">350d2bc4-8489-4a5a-991a-99e32671f962</entry>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <entry name="uuid">350d2bc4-8489-4a5a-991a-99e32671f962</entry>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </system>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <os>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  </os>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <features>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  </features>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <target dev="vdb" bus="virtio"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.config"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:da:68:61"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <target dev="tapa79ae82e-bf"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/console.log" append="off"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <video>
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </video>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:24:15 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:24:15 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:24:15 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:24:15 compute-0 nova_compute[189491]: </domain>
Dec  1 09:24:15 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.781 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Preparing to wait for external event network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.781 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.781 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.782 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.783 189495 DEBUG nova.virt.libvirt.vif [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu',id=3,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-l4ia17ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:24:10Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDk4MTk5MDAyMTc1ODI0NDQ0MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  1 09:24:15 compute-0 nova_compute[189491]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDk4MTk5MDAyMTc1ODI0NDQ0MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=350d2bc4-8489-4a5a-991a-99e32671f962,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.783 189495 DEBUG nova.network.os_vif_util [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.784 189495 DEBUG nova.network.os_vif_util [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:68:61,bridge_name='br-int',has_traffic_filtering=True,id=a79ae82e-bfbc-4718-a23a-6d99c6057e19,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa79ae82e-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.784 189495 DEBUG os_vif [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:68:61,bridge_name='br-int',has_traffic_filtering=True,id=a79ae82e-bfbc-4718-a23a-6d99c6057e19,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa79ae82e-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.785 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.785 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.786 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.789 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.790 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa79ae82e-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.790 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa79ae82e-bf, col_values=(('external_ids', {'iface-id': 'a79ae82e-bfbc-4718-a23a-6d99c6057e19', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:68:61', 'vm-uuid': '350d2bc4-8489-4a5a-991a-99e32671f962'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.792 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:15 compute-0 NetworkManager[56318]: <info>  [1764581055.7935] manager: (tapa79ae82e-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.794 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.802 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.804 189495 INFO os_vif [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:68:61,bridge_name='br-int',has_traffic_filtering=True,id=a79ae82e-bfbc-4718-a23a-6d99c6057e19,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa79ae82e-bf')#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.857 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.857 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.857 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.858 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No VIF found with MAC fa:16:3e:da:68:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:24:15 compute-0 nova_compute[189491]: 2025-12-01 09:24:15.858 189495 INFO nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Using config drive#033[00m
Dec  1 09:24:16 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:24:15.762 189495 DEBUG nova.virt.libvirt.vif [None req-5a663ef5-aa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:24:16 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:24:15.783 189495 DEBUG nova.virt.libvirt.vif [None req-5a663ef5-aa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:24:16 compute-0 nova_compute[189491]: 2025-12-01 09:24:16.585 189495 INFO nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Creating config drive at /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.config#033[00m
Dec  1 09:24:16 compute-0 nova_compute[189491]: 2025-12-01 09:24:16.594 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0pcge22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:16 compute-0 nova_compute[189491]: 2025-12-01 09:24:16.733 189495 DEBUG oslo_concurrency.processutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0pcge22" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:16 compute-0 kernel: tapa79ae82e-bf: entered promiscuous mode
Dec  1 09:24:16 compute-0 NetworkManager[56318]: <info>  [1764581056.8214] manager: (tapa79ae82e-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec  1 09:24:16 compute-0 ovn_controller[97794]: 2025-12-01T09:24:16Z|00040|binding|INFO|Claiming lport a79ae82e-bfbc-4718-a23a-6d99c6057e19 for this chassis.
Dec  1 09:24:16 compute-0 ovn_controller[97794]: 2025-12-01T09:24:16Z|00041|binding|INFO|a79ae82e-bfbc-4718-a23a-6d99c6057e19: Claiming fa:16:3e:da:68:61 192.168.0.209
Dec  1 09:24:16 compute-0 nova_compute[189491]: 2025-12-01 09:24:16.826 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.842 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:68:61 192.168.0.209'], port_security=['fa:16:3e:da:68:61 192.168.0.209'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vdfkxa75cfa3-5bcj5tw5woc6-eld5euc3zwia-port-76rbqcpmcvz3', 'neutron:cidrs': '192.168.0.209/24', 'neutron:device_id': '350d2bc4-8489-4a5a-991a-99e32671f962', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vdfkxa75cfa3-5bcj5tw5woc6-eld5euc3zwia-port-76rbqcpmcvz3', 'neutron:project_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5a5e6d4-6211-447f-b3f6-e2120ff69d87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=260b7b6c-4405-41e2-9dc8-1595161adaf8, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=a79ae82e-bfbc-4718-a23a-6d99c6057e19) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.843 106659 INFO neutron.agent.ovn.metadata.agent [-] Port a79ae82e-bfbc-4718-a23a-6d99c6057e19 in datapath 52d15875-2a2e-463a-bc5d-8fa6b8466bff bound to our chassis#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.844 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52d15875-2a2e-463a-bc5d-8fa6b8466bff#033[00m
Dec  1 09:24:16 compute-0 ovn_controller[97794]: 2025-12-01T09:24:16Z|00042|binding|INFO|Setting lport a79ae82e-bfbc-4718-a23a-6d99c6057e19 ovn-installed in OVS
Dec  1 09:24:16 compute-0 ovn_controller[97794]: 2025-12-01T09:24:16Z|00043|binding|INFO|Setting lport a79ae82e-bfbc-4718-a23a-6d99c6057e19 up in Southbound
Dec  1 09:24:16 compute-0 nova_compute[189491]: 2025-12-01 09:24:16.852 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:16 compute-0 nova_compute[189491]: 2025-12-01 09:24:16.857 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.864 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[417dcfaa-e09e-414a-8808-b9b9afbff57e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:24:16 compute-0 systemd-udevd[243048]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:24:16 compute-0 systemd-machined[155812]: New machine qemu-3-instance-00000003.
Dec  1 09:24:16 compute-0 NetworkManager[56318]: <info>  [1764581056.8943] device (tapa79ae82e-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:24:16 compute-0 NetworkManager[56318]: <info>  [1764581056.8991] device (tapa79ae82e-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.903 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[81a51b6b-01ce-4296-8399-426c210d6680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:24:16 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.907 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee509b5-ce29-4b1c-ad16-badbd050b87a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.935 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[06b41ef5-a02a-46a1-ab8e-4b6bb33aa8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.961 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa82399-9d56-47f5-bfd9-85dba28cc5b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52d15875-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:8c:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383928, 'reachable_time': 21789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243055, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.980 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[dd315a40-cc3a-4804-9506-2948987cffde]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383943, 'tstamp': 383943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243059, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383945, 'tstamp': 383945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243059, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.984 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d15875-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:24:16 compute-0 nova_compute[189491]: 2025-12-01 09:24:16.986 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:16 compute-0 nova_compute[189491]: 2025-12-01 09:24:16.987 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.988 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d15875-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.989 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.989 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52d15875-20, col_values=(('external_ids', {'iface-id': 'dbcd2eb8-9722-4ebb-b254-d57f599617d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:24:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:16.990 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.366 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764581057.364834, 350d2bc4-8489-4a5a-991a-99e32671f962 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.366 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] VM Started (Lifecycle Event)#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.423 189495 DEBUG nova.compute.manager [req-85b42ee6-d844-4f2a-8974-f100e7f54157 req-529787f1-66f6-4d4a-bb09-26ade38248b7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received event network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.424 189495 DEBUG oslo_concurrency.lockutils [req-85b42ee6-d844-4f2a-8974-f100e7f54157 req-529787f1-66f6-4d4a-bb09-26ade38248b7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.425 189495 DEBUG oslo_concurrency.lockutils [req-85b42ee6-d844-4f2a-8974-f100e7f54157 req-529787f1-66f6-4d4a-bb09-26ade38248b7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.425 189495 DEBUG oslo_concurrency.lockutils [req-85b42ee6-d844-4f2a-8974-f100e7f54157 req-529787f1-66f6-4d4a-bb09-26ade38248b7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.425 189495 DEBUG nova.compute.manager [req-85b42ee6-d844-4f2a-8974-f100e7f54157 req-529787f1-66f6-4d4a-bb09-26ade38248b7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Processing event network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.426 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.442 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.444 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.449 189495 INFO nova.virt.libvirt.driver [-] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Instance spawned successfully.#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.449 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.489 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.496 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.496 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.497 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.497 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.498 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.498 189495 DEBUG nova.virt.libvirt.driver [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.502 189495 DEBUG nova.network.neutron [req-f3a18a3e-b548-43bd-a5ac-195cc07d9902 req-0f45f0e3-e28c-40ad-b0ef-e6abf3b615c4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updated VIF entry in instance network info cache for port a79ae82e-bfbc-4718-a23a-6d99c6057e19. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.502 189495 DEBUG nova.network.neutron [req-f3a18a3e-b548-43bd-a5ac-195cc07d9902 req-0f45f0e3-e28c-40ad-b0ef-e6abf3b615c4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updating instance_info_cache with network_info: [{"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.506 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.543 189495 DEBUG oslo_concurrency.lockutils [req-f3a18a3e-b548-43bd-a5ac-195cc07d9902 req-0f45f0e3-e28c-40ad-b0ef-e6abf3b615c4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.567 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.568 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764581057.365047, 350d2bc4-8489-4a5a-991a-99e32671f962 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.568 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.580 189495 INFO nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Took 7.30 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.580 189495 DEBUG nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.593 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.599 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764581057.429702, 350d2bc4-8489-4a5a-991a-99e32671f962 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.599 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.620 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.636 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.644 189495 INFO nova.compute.manager [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Took 8.30 seconds to build instance.#033[00m
Dec  1 09:24:17 compute-0 nova_compute[189491]: 2025-12-01 09:24:17.661 189495 DEBUG oslo_concurrency.lockutils [None req-5a663ef5-aa97-4eb3-871f-7aca50005b00 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:18 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 09:24:18 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 09:24:19 compute-0 nova_compute[189491]: 2025-12-01 09:24:19.577 189495 DEBUG nova.compute.manager [req-65baa432-d0c5-4ee0-8c43-1935fee7228c req-56773118-2931-4f2f-a30f-6edc2d4ca5bb ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received event network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:24:19 compute-0 nova_compute[189491]: 2025-12-01 09:24:19.578 189495 DEBUG oslo_concurrency.lockutils [req-65baa432-d0c5-4ee0-8c43-1935fee7228c req-56773118-2931-4f2f-a30f-6edc2d4ca5bb ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:19 compute-0 nova_compute[189491]: 2025-12-01 09:24:19.578 189495 DEBUG oslo_concurrency.lockutils [req-65baa432-d0c5-4ee0-8c43-1935fee7228c req-56773118-2931-4f2f-a30f-6edc2d4ca5bb ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:19 compute-0 nova_compute[189491]: 2025-12-01 09:24:19.578 189495 DEBUG oslo_concurrency.lockutils [req-65baa432-d0c5-4ee0-8c43-1935fee7228c req-56773118-2931-4f2f-a30f-6edc2d4ca5bb ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:19 compute-0 nova_compute[189491]: 2025-12-01 09:24:19.579 189495 DEBUG nova.compute.manager [req-65baa432-d0c5-4ee0-8c43-1935fee7228c req-56773118-2931-4f2f-a30f-6edc2d4ca5bb ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] No waiting events found dispatching network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:24:19 compute-0 nova_compute[189491]: 2025-12-01 09:24:19.579 189495 WARNING nova.compute.manager [req-65baa432-d0c5-4ee0-8c43-1935fee7228c req-56773118-2931-4f2f-a30f-6edc2d4ca5bb ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received unexpected event network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 for instance with vm_state active and task_state None.#033[00m
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.782 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.783 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.792 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.795 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'name': 'vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.799 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 350d2bc4-8489-4a5a-991a-99e32671f962 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:24:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:19.800 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/350d2bc4-8489-4a5a-991a-99e32671f962 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:24:20 compute-0 podman[243091]: 2025-12-01 09:24:20.779322857 +0000 UTC m=+0.140001529 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 09:24:20 compute-0 nova_compute[189491]: 2025-12-01 09:24:20.792 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.184 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Mon, 01 Dec 2025 09:24:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fabc9b53-d08b-43c9-ba98-df684dbebf0b x-openstack-request-id: req-fabc9b53-d08b-43c9-ba98-df684dbebf0b _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.184 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "350d2bc4-8489-4a5a-991a-99e32671f962", "name": "vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu", "status": "ACTIVE", "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "user_id": "962a55152ff34fdda5eae1f8aee3a7a9", "metadata": {"metering.server_group": "1555a697-b0aa-4429-98e7-26e6671e228d"}, "hostId": "8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1", "image": {"id": "304c689d-2799-45ae-8166-517d5fd107b2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/304c689d-2799-45ae-8166-517d5fd107b2"}]}, "flavor": {"id": "719a52fe-7f4b-48c0-b9dc-6a91d4ec600c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/719a52fe-7f4b-48c0-b9dc-6a91d4ec600c"}]}, "created": "2025-12-01T09:24:06Z", "updated": "2025-12-01T09:24:17Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.209", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:da:68:61"}, {"version": 4, "addr": "192.168.122.197", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:da:68:61"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/350d2bc4-8489-4a5a-991a-99e32671f962"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/350d2bc4-8489-4a5a-991a-99e32671f962"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T09:24:17.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.184 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/350d2bc4-8489-4a5a-991a-99e32671f962 used request id req-fabc9b53-d08b-43c9-ba98-df684dbebf0b request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.186 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '350d2bc4-8489-4a5a-991a-99e32671f962', 'name': 'vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.186 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:24:21.187220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.255 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.256 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.256 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.320 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.320 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.320 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.385 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.385 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.385 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.386 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:24:21.387109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.412 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.413 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.413 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.437 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.437 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.438 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.470 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.470 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.471 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.471 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.472 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.472 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.472 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.472 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 469977634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.473 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 95101905 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.473 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 74341595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.473 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 345642363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.473 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.473 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 884321 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:24:21.472130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:24:21.476207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.476 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.476 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.477 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.477 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.478 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.478 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.478 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.479 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.479 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.480 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:24:21.482200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.482 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.483 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.483 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.484 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.484 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.484 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.485 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.485 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.486 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.487 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.487 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.488 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:24:21.488166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.515 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.545 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.567 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.569 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.569 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.570 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.570 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 1287067524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.571 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 13179146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:24:21.569237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.571 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.572 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.573 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.573 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.575 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:24:21.575560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.576 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.576 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.576 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.577 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.577 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.578 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.578 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.579 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.579 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:24:21.581484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.586 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.591 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.595 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 350d2bc4-8489-4a5a-991a-99e32671f962 / tapa79ae82e-bf inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.595 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.596 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.596 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.596 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu>]
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T09:24:21.596622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.597 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:24:21.597546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.599 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.599 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.599 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:24:21.599639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.601 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.601 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.602 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.605 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.605 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:24:21.603522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.606 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.606 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.607 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:24:21.606419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.607 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.608 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.609 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.609 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.609 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.609 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.609 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:24:21.609781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.610 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.610 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.611 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.611 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.612 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.612 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.612 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes volume: 4830 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:24:21.612079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.613 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.614 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:24:21.614312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.614 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.615 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.615 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.616 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.616 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.616 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T09:24:21.616577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.617 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu>]
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.617 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.618 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.618 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.82421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.618 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/memory.usage volume: 49.06640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:24:21.618165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.619 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.619 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 350d2bc4-8489-4a5a-991a-99e32671f962: ceilometer.compute.pollsters.NoVolumeException
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.620 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.620 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.620 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.621 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:24:21.620763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.621 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.621 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.623 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.623 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:24:21.623134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.624 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.625 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.625 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.626 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:24:21.625902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.626 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 33810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.626 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/cpu volume: 315290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.627 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/cpu volume: 3960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.627 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.628 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.628 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:24:21.628184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.628 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.629 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.629 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.629 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.629 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.630 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.630 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.630 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.631 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.632 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.632 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.632 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.633 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:24:21.631847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.634 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.634 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.634 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.635 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.635 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.635 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.636 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.636 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.636 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.636 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.637 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:24:21.634240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.638 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.638 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.638 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:24:21.638678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.639 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.639 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.641 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.642 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:24:21.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:24:22 compute-0 nova_compute[189491]: 2025-12-01 09:24:22.447 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:22 compute-0 nova_compute[189491]: 2025-12-01 09:24:22.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:22 compute-0 nova_compute[189491]: 2025-12-01 09:24:22.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:24:22 compute-0 nova_compute[189491]: 2025-12-01 09:24:22.717 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:24:23 compute-0 nova_compute[189491]: 2025-12-01 09:24:23.442 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:24:23 compute-0 nova_compute[189491]: 2025-12-01 09:24:23.442 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:24:23 compute-0 nova_compute[189491]: 2025-12-01 09:24:23.442 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:24:23 compute-0 nova_compute[189491]: 2025-12-01 09:24:23.442 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:24:24 compute-0 podman[243112]: 2025-12-01 09:24:24.719864118 +0000 UTC m=+0.088836610 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:24:24 compute-0 podman[243113]: 2025-12-01 09:24:24.75052075 +0000 UTC m=+0.123788076 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.openshift.tags=base rhel9)
Dec  1 09:24:25 compute-0 nova_compute[189491]: 2025-12-01 09:24:25.795 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:26.509 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:26.511 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:24:26.512 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.655 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.724 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.725 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.726 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.727 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.728 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.788 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.789 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.790 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.791 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:24:26 compute-0 nova_compute[189491]: 2025-12-01 09:24:26.950 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.023 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.025 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.093 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.095 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.165 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.167 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.234 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.263 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.332 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.335 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.397 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.399 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.452 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.463 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.464 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.532 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.542 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.614 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.616 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.675 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.676 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.737 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.739 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:24:27 compute-0 nova_compute[189491]: 2025-12-01 09:24:27.799 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.258 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.260 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4926MB free_disk=72.36395263671875GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.260 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.262 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.391 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.391 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.392 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.392 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.393 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.477 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.495 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.578 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:24:28 compute-0 nova_compute[189491]: 2025-12-01 09:24:28.579 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:24:29 compute-0 nova_compute[189491]: 2025-12-01 09:24:29.569 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:29 compute-0 nova_compute[189491]: 2025-12-01 09:24:29.569 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:29 compute-0 nova_compute[189491]: 2025-12-01 09:24:29.569 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:29 compute-0 nova_compute[189491]: 2025-12-01 09:24:29.570 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:29 compute-0 nova_compute[189491]: 2025-12-01 09:24:29.570 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:24:29 compute-0 podman[243191]: 2025-12-01 09:24:29.704741268 +0000 UTC m=+0.074519484 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:24:29 compute-0 podman[243190]: 2025-12-01 09:24:29.708186822 +0000 UTC m=+0.085400968 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, 
io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 09:24:29 compute-0 nova_compute[189491]: 2025-12-01 09:24:29.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:24:29 compute-0 podman[203700]: time="2025-12-01T09:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:24:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:24:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec  1 09:24:30 compute-0 nova_compute[189491]: 2025-12-01 09:24:30.798 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:31 compute-0 openstack_network_exporter[205866]: ERROR   09:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:24:31 compute-0 openstack_network_exporter[205866]: ERROR   09:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:24:31 compute-0 openstack_network_exporter[205866]: ERROR   09:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:24:31 compute-0 openstack_network_exporter[205866]: ERROR   09:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:24:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:24:31 compute-0 openstack_network_exporter[205866]: ERROR   09:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:24:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:24:32 compute-0 nova_compute[189491]: 2025-12-01 09:24:32.455 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:33 compute-0 podman[243227]: 2025-12-01 09:24:33.698180941 +0000 UTC m=+0.073247644 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:24:33 compute-0 podman[243228]: 2025-12-01 09:24:33.73581446 +0000 UTC m=+0.105566764 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:24:35 compute-0 nova_compute[189491]: 2025-12-01 09:24:35.801 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:37 compute-0 nova_compute[189491]: 2025-12-01 09:24:37.455 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:40 compute-0 nova_compute[189491]: 2025-12-01 09:24:40.804 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:42 compute-0 nova_compute[189491]: 2025-12-01 09:24:42.457 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:45 compute-0 podman[243274]: 2025-12-01 09:24:45.005808251 +0000 UTC m=+0.093412752 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:24:45 compute-0 podman[243273]: 2025-12-01 09:24:45.017088423 +0000 UTC m=+0.103592257 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:24:45 compute-0 nova_compute[189491]: 2025-12-01 09:24:45.807 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:46 compute-0 ovn_controller[97794]: 2025-12-01T09:24:46Z|00044|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Dec  1 09:24:47 compute-0 nova_compute[189491]: 2025-12-01 09:24:47.459 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:48 compute-0 ovn_controller[97794]: 2025-12-01T09:24:48Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:68:61 192.168.0.209
Dec  1 09:24:48 compute-0 ovn_controller[97794]: 2025-12-01T09:24:48Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:68:61 192.168.0.209
Dec  1 09:24:50 compute-0 nova_compute[189491]: 2025-12-01 09:24:50.810 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:51 compute-0 podman[243322]: 2025-12-01 09:24:51.751552535 +0000 UTC m=+0.124729710 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 09:24:52 compute-0 nova_compute[189491]: 2025-12-01 09:24:52.464 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:55 compute-0 podman[243343]: 2025-12-01 09:24:55.697429046 +0000 UTC m=+0.070704512 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:24:55 compute-0 podman[243344]: 2025-12-01 09:24:55.707462169 +0000 UTC m=+0.077796094 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:24:55 compute-0 nova_compute[189491]: 2025-12-01 09:24:55.813 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:57 compute-0 nova_compute[189491]: 2025-12-01 09:24:57.469 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:24:59 compute-0 podman[203700]: time="2025-12-01T09:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:24:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:24:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 09:25:00 compute-0 podman[243386]: 2025-12-01 09:25:00.736645941 +0000 UTC m=+0.097825038 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Dec  1 09:25:00 compute-0 podman[243387]: 2025-12-01 09:25:00.756431779 +0000 UTC m=+0.126703777 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 09:25:00 compute-0 nova_compute[189491]: 2025-12-01 09:25:00.816 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:01 compute-0 openstack_network_exporter[205866]: ERROR   09:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:25:01 compute-0 openstack_network_exporter[205866]: ERROR   09:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:25:01 compute-0 openstack_network_exporter[205866]: ERROR   09:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:25:01 compute-0 openstack_network_exporter[205866]: ERROR   09:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:25:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:25:01 compute-0 openstack_network_exporter[205866]: ERROR   09:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:25:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:25:02 compute-0 nova_compute[189491]: 2025-12-01 09:25:02.470 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:04 compute-0 podman[243425]: 2025-12-01 09:25:04.72504601 +0000 UTC m=+0.092693873 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  1 09:25:04 compute-0 podman[243426]: 2025-12-01 09:25:04.765485859 +0000 UTC m=+0.125222731 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  1 09:25:05 compute-0 nova_compute[189491]: 2025-12-01 09:25:05.819 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:07 compute-0 nova_compute[189491]: 2025-12-01 09:25:07.477 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:10 compute-0 nova_compute[189491]: 2025-12-01 09:25:10.823 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:12 compute-0 nova_compute[189491]: 2025-12-01 09:25:12.479 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:15 compute-0 podman[243471]: 2025-12-01 09:25:15.743029042 +0000 UTC m=+0.104893278 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:25:15 compute-0 podman[243472]: 2025-12-01 09:25:15.757303648 +0000 UTC m=+0.115662600 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  1 09:25:15 compute-0 nova_compute[189491]: 2025-12-01 09:25:15.827 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:17 compute-0 nova_compute[189491]: 2025-12-01 09:25:17.481 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:20 compute-0 nova_compute[189491]: 2025-12-01 09:25:20.830 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:22 compute-0 nova_compute[189491]: 2025-12-01 09:25:22.482 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:22 compute-0 podman[243516]: 2025-12-01 09:25:22.709471416 +0000 UTC m=+0.082245131 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:25:23 compute-0 nova_compute[189491]: 2025-12-01 09:25:23.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:23 compute-0 nova_compute[189491]: 2025-12-01 09:25:23.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:25:24 compute-0 nova_compute[189491]: 2025-12-01 09:25:24.091 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:25:24 compute-0 nova_compute[189491]: 2025-12-01 09:25:24.092 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:25:24 compute-0 nova_compute[189491]: 2025-12-01 09:25:24.094 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:25:25 compute-0 nova_compute[189491]: 2025-12-01 09:25:25.160 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:25:25 compute-0 nova_compute[189491]: 2025-12-01 09:25:25.178 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:25:25 compute-0 nova_compute[189491]: 2025-12-01 09:25:25.179 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:25:25 compute-0 nova_compute[189491]: 2025-12-01 09:25:25.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:25 compute-0 nova_compute[189491]: 2025-12-01 09:25:25.834 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:25:26.511 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:25:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:25:26.512 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:25:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:25:26.513 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:25:26 compute-0 nova_compute[189491]: 2025-12-01 09:25:26.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:26 compute-0 podman[243537]: 2025-12-01 09:25:26.726268093 +0000 UTC m=+0.085560401 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:25:26 compute-0 podman[243538]: 2025-12-01 09:25:26.777210976 +0000 UTC m=+0.129833982 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.4, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, distribution-scope=public)
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.485 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.748 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.750 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.751 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.860 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.942 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:27 compute-0 nova_compute[189491]: 2025-12-01 09:25:27.944 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.025 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.027 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.117 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.118 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.182 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.191 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.248 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.250 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.335 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.337 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.416 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.420 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.493 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.503 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.573 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.575 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.667 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.669 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.754 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.755 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:25:28 compute-0 nova_compute[189491]: 2025-12-01 09:25:28.821 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.166 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.168 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4894MB free_disk=72.34297561645508GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.169 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.170 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.279 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.280 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.280 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.280 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.281 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.382 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.402 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.404 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:25:29 compute-0 nova_compute[189491]: 2025-12-01 09:25:29.404 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:25:29 compute-0 podman[203700]: time="2025-12-01T09:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:25:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:25:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec  1 09:25:30 compute-0 nova_compute[189491]: 2025-12-01 09:25:30.837 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:31 compute-0 nova_compute[189491]: 2025-12-01 09:25:31.406 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:31 compute-0 openstack_network_exporter[205866]: ERROR   09:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:25:31 compute-0 openstack_network_exporter[205866]: ERROR   09:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:25:31 compute-0 openstack_network_exporter[205866]: ERROR   09:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:25:31 compute-0 openstack_network_exporter[205866]: ERROR   09:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:25:31 compute-0 openstack_network_exporter[205866]: ERROR   09:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:25:31 compute-0 nova_compute[189491]: 2025-12-01 09:25:31.505 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:31 compute-0 nova_compute[189491]: 2025-12-01 09:25:31.506 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:31 compute-0 nova_compute[189491]: 2025-12-01 09:25:31.507 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:31 compute-0 nova_compute[189491]: 2025-12-01 09:25:31.508 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:25:31 compute-0 nova_compute[189491]: 2025-12-01 09:25:31.509 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:25:31 compute-0 podman[243616]: 2025-12-01 09:25:31.726250389 +0000 UTC m=+0.083535483 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:25:31 compute-0 podman[243615]: 2025-12-01 09:25:31.766976244 +0000 UTC m=+0.126866010 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc.)
Dec  1 09:25:32 compute-0 nova_compute[189491]: 2025-12-01 09:25:32.487 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:35 compute-0 podman[243654]: 2025-12-01 09:25:35.730852931 +0000 UTC m=+0.092640993 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 09:25:35 compute-0 podman[243655]: 2025-12-01 09:25:35.791025107 +0000 UTC m=+0.150516483 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:25:35 compute-0 nova_compute[189491]: 2025-12-01 09:25:35.841 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:37 compute-0 nova_compute[189491]: 2025-12-01 09:25:37.491 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:40 compute-0 nova_compute[189491]: 2025-12-01 09:25:40.844 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:42 compute-0 nova_compute[189491]: 2025-12-01 09:25:42.495 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:45 compute-0 nova_compute[189491]: 2025-12-01 09:25:45.848 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:46 compute-0 podman[243696]: 2025-12-01 09:25:46.700580785 +0000 UTC m=+0.078450509 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:25:46 compute-0 podman[243697]: 2025-12-01 09:25:46.735610343 +0000 UTC m=+0.109920531 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 09:25:47 compute-0 nova_compute[189491]: 2025-12-01 09:25:47.498 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:50 compute-0 nova_compute[189491]: 2025-12-01 09:25:50.853 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:52 compute-0 nova_compute[189491]: 2025-12-01 09:25:52.501 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:53 compute-0 podman[243736]: 2025-12-01 09:25:53.7563232 +0000 UTC m=+0.102699356 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  1 09:25:54 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:25:54.374 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:25:54 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:25:54.375 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:25:54 compute-0 nova_compute[189491]: 2025-12-01 09:25:54.386 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:55 compute-0 nova_compute[189491]: 2025-12-01 09:25:55.857 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:57 compute-0 nova_compute[189491]: 2025-12-01 09:25:57.503 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:25:57 compute-0 podman[243754]: 2025-12-01 09:25:57.73106715 +0000 UTC m=+0.092929970 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:25:57 compute-0 podman[243755]: 2025-12-01 09:25:57.763628147 +0000 UTC m=+0.113015225 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, container_name=kepler, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 09:25:59 compute-0 podman[203700]: time="2025-12-01T09:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:25:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:25:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 09:26:00 compute-0 nova_compute[189491]: 2025-12-01 09:26:00.863 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.090 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.091 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.116 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.208 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.209 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.226 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.227 189495 INFO nova.compute.claims [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:26:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:01.383 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:26:01 compute-0 openstack_network_exporter[205866]: ERROR   09:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:26:01 compute-0 openstack_network_exporter[205866]: ERROR   09:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:26:01 compute-0 openstack_network_exporter[205866]: ERROR   09:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:26:01 compute-0 openstack_network_exporter[205866]: ERROR   09:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:26:01 compute-0 openstack_network_exporter[205866]: ERROR   09:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.496 189495 DEBUG nova.compute.provider_tree [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.514 189495 DEBUG nova.scheduler.client.report [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.536 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.537 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.578 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.579 189495 DEBUG nova.network.neutron [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.596 189495 INFO nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.643 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.765 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.768 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.769 189495 INFO nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Creating image(s)#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.770 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.771 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.773 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.800 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.899 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.901 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.903 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:01 compute-0 nova_compute[189491]: 2025-12-01 09:26:01.938 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.028 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.030 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5,backing_fmt=raw /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.082 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5,backing_fmt=raw /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.084 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.085 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.183 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.185 189495 DEBUG nova.virt.disk.api [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Checking if we can resize image /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.186 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.286 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.288 189495 DEBUG nova.virt.disk.api [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Cannot resize image /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.289 189495 DEBUG nova.objects.instance [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'migration_context' on Instance uuid 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.313 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.314 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.316 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.344 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.440 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.442 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.443 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.463 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.507 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.539 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.540 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.585 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.586 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.587 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.642 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.643 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.644 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Ensure instance console log exists: /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.644 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.645 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:02 compute-0 nova_compute[189491]: 2025-12-01 09:26:02.645 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:02 compute-0 podman[243821]: 2025-12-01 09:26:02.720449549 +0000 UTC m=+0.076074462 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  1 09:26:02 compute-0 podman[243820]: 2025-12-01 09:26:02.736870496 +0000 UTC m=+0.089487167 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., distribution-scope=public)
Dec  1 09:26:05 compute-0 nova_compute[189491]: 2025-12-01 09:26:05.868 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:06 compute-0 nova_compute[189491]: 2025-12-01 09:26:06.608 189495 DEBUG nova.network.neutron [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Successfully updated port: 609b09f2-6c63-41e7-9850-15c0098f35b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:26:06 compute-0 nova_compute[189491]: 2025-12-01 09:26:06.635 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:26:06 compute-0 nova_compute[189491]: 2025-12-01 09:26:06.636 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquired lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:26:06 compute-0 nova_compute[189491]: 2025-12-01 09:26:06.636 189495 DEBUG nova.network.neutron [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:26:06 compute-0 podman[243860]: 2025-12-01 09:26:06.763751057 +0000 UTC m=+0.128264464 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 09:26:06 compute-0 podman[243861]: 2025-12-01 09:26:06.839425137 +0000 UTC m=+0.196464444 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  1 09:26:06 compute-0 nova_compute[189491]: 2025-12-01 09:26:06.854 189495 DEBUG nova.network.neutron [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:26:07 compute-0 nova_compute[189491]: 2025-12-01 09:26:07.110 189495 DEBUG nova.compute.manager [req-1f8d8ec8-9ace-463a-8bcb-033f17770166 req-e8ae8565-0ae7-4f9f-bd58-e1566da41e35 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received event network-changed-609b09f2-6c63-41e7-9850-15c0098f35b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:26:07 compute-0 nova_compute[189491]: 2025-12-01 09:26:07.111 189495 DEBUG nova.compute.manager [req-1f8d8ec8-9ace-463a-8bcb-033f17770166 req-e8ae8565-0ae7-4f9f-bd58-e1566da41e35 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Refreshing instance network info cache due to event network-changed-609b09f2-6c63-41e7-9850-15c0098f35b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:26:07 compute-0 nova_compute[189491]: 2025-12-01 09:26:07.111 189495 DEBUG oslo_concurrency.lockutils [req-1f8d8ec8-9ace-463a-8bcb-033f17770166 req-e8ae8565-0ae7-4f9f-bd58-e1566da41e35 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:26:07 compute-0 nova_compute[189491]: 2025-12-01 09:26:07.509 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.828 189495 DEBUG nova.network.neutron [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updating instance_info_cache with network_info: [{"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.852 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Releasing lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.853 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Instance network_info: |[{"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.855 189495 DEBUG oslo_concurrency.lockutils [req-1f8d8ec8-9ace-463a-8bcb-033f17770166 req-e8ae8565-0ae7-4f9f-bd58-e1566da41e35 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.856 189495 DEBUG nova.network.neutron [req-1f8d8ec8-9ace-463a-8bcb-033f17770166 req-e8ae8565-0ae7-4f9f-bd58-e1566da41e35 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Refreshing network info cache for port 609b09f2-6c63-41e7-9850-15c0098f35b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.862 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Start _get_guest_xml network_info=[{"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:15:08Z,direct_url=<?>,disk_format='qcow2',id=304c689d-2799-45ae-8166-517d5fd107b2,min_disk=0,min_ram=0,name='cirros',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:15:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '304c689d-2799-45ae-8166-517d5fd107b2'}], 'ephemerals': [{'size': 1, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.876 189495 WARNING nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.888 189495 DEBUG nova.virt.libvirt.host [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.889 189495 DEBUG nova.virt.libvirt.host [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.895 189495 DEBUG nova.virt.libvirt.host [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.896 189495 DEBUG nova.virt.libvirt.host [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.896 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.897 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:15:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='719a52fe-7f4b-48c0-b9dc-6a91d4ec600c',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:15:08Z,direct_url=<?>,disk_format='qcow2',id=304c689d-2799-45ae-8166-517d5fd107b2,min_disk=0,min_ram=0,name='cirros',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:15:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.898 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.898 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.899 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.899 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.900 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.900 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.900 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.901 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.901 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.901 189495 DEBUG nova.virt.hardware [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.906 189495 DEBUG nova.virt.libvirt.vif [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:25:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge',id=4,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-gcvg4l82',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:26:01Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjUzMTI2MzcyNTc3ODUyMDk5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  1 09:26:08 compute-0 nova_compute[189491]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjUzMTI2MzcyNTc3ODUyMDk5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=97dcaede-87ef-4c1c-a4a8-4ec9587cfe86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.906 189495 DEBUG nova.network.os_vif_util [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.907 189495 DEBUG nova.network.os_vif_util [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:39:1e,bridge_name='br-int',has_traffic_filtering=True,id=609b09f2-6c63-41e7-9850-15c0098f35b4,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609b09f2-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.908 189495 DEBUG nova.objects.instance [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.924 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <uuid>97dcaede-87ef-4c1c-a4a8-4ec9587cfe86</uuid>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <name>instance-00000004</name>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <memory>524288</memory>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <nova:name>vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge</nova:name>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:26:08</nova:creationTime>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <nova:flavor name="m1.small">
Dec  1 09:26:08 compute-0 nova_compute[189491]:        <nova:memory>512</nova:memory>
Dec  1 09:26:08 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:26:08 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:26:08 compute-0 nova_compute[189491]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 09:26:08 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:26:08 compute-0 nova_compute[189491]:        <nova:user uuid="962a55152ff34fdda5eae1f8aee3a7a9">admin</nova:user>
Dec  1 09:26:08 compute-0 nova_compute[189491]:        <nova:project uuid="fac95b8a995a4174bfa966a8d9d9aa01">admin</nova:project>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="304c689d-2799-45ae-8166-517d5fd107b2"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:26:08 compute-0 nova_compute[189491]:        <nova:port uuid="609b09f2-6c63-41e7-9850-15c0098f35b4">
Dec  1 09:26:08 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="192.168.0.18" ipVersion="4"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <system>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <entry name="serial">97dcaede-87ef-4c1c-a4a8-4ec9587cfe86</entry>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <entry name="uuid">97dcaede-87ef-4c1c-a4a8-4ec9587cfe86</entry>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </system>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <os>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  </os>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <features>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  </features>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <target dev="vdb" bus="virtio"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.config"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:40:39:1e"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <target dev="tap609b09f2-6c"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/console.log" append="off"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <video>
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </video>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:26:08 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:26:08 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:26:08 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:26:08 compute-0 nova_compute[189491]: </domain>
Dec  1 09:26:08 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.925 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Preparing to wait for external event network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.925 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.926 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.926 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.927 189495 DEBUG nova.virt.libvirt.vif [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:25:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge',id=4,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-gcvg4l82',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:26:01Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjUzMTI2MzcyNTc3ODUyMDk5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  1 09:26:08 compute-0 nova_compute[189491]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjUzMTI2MzcyNTc3ODUyMDk5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=97dcaede-87ef-4c1c-a4a8-4ec9587cfe86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.927 189495 DEBUG nova.network.os_vif_util [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.928 189495 DEBUG nova.network.os_vif_util [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:39:1e,bridge_name='br-int',has_traffic_filtering=True,id=609b09f2-6c63-41e7-9850-15c0098f35b4,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609b09f2-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.928 189495 DEBUG os_vif [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:39:1e,bridge_name='br-int',has_traffic_filtering=True,id=609b09f2-6c63-41e7-9850-15c0098f35b4,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609b09f2-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.929 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.930 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.930 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.935 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.935 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap609b09f2-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.936 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap609b09f2-6c, col_values=(('external_ids', {'iface-id': '609b09f2-6c63-41e7-9850-15c0098f35b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:39:1e', 'vm-uuid': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:26:08 compute-0 NetworkManager[56318]: <info>  [1764581168.9407] manager: (tap609b09f2-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.938 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.944 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.952 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:08 compute-0 nova_compute[189491]: 2025-12-01 09:26:08.955 189495 INFO os_vif [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:39:1e,bridge_name='br-int',has_traffic_filtering=True,id=609b09f2-6c63-41e7-9850-15c0098f35b4,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609b09f2-6c')#033[00m
Dec  1 09:26:09 compute-0 nova_compute[189491]: 2025-12-01 09:26:09.028 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:26:09 compute-0 nova_compute[189491]: 2025-12-01 09:26:09.029 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:26:09 compute-0 nova_compute[189491]: 2025-12-01 09:26:09.029 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:26:09 compute-0 nova_compute[189491]: 2025-12-01 09:26:09.029 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No VIF found with MAC fa:16:3e:40:39:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:26:09 compute-0 nova_compute[189491]: 2025-12-01 09:26:09.030 189495 INFO nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Using config drive#033[00m
Dec  1 09:26:09 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:26:08.906 189495 DEBUG nova.virt.libvirt.vif [None req-e15013f0-6e [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:26:09 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:26:08.927 189495 DEBUG nova.virt.libvirt.vif [None req-e15013f0-6e [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:26:09 compute-0 nova_compute[189491]: 2025-12-01 09:26:09.763 189495 INFO nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Creating config drive at /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.config#033[00m
Dec  1 09:26:09 compute-0 nova_compute[189491]: 2025-12-01 09:26:09.776 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphldsbv7c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:09 compute-0 nova_compute[189491]: 2025-12-01 09:26:09.922 189495 DEBUG oslo_concurrency.processutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphldsbv7c" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:10 compute-0 kernel: tap609b09f2-6c: entered promiscuous mode
Dec  1 09:26:10 compute-0 NetworkManager[56318]: <info>  [1764581170.0444] manager: (tap609b09f2-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.044 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:10 compute-0 ovn_controller[97794]: 2025-12-01T09:26:10Z|00045|binding|INFO|Claiming lport 609b09f2-6c63-41e7-9850-15c0098f35b4 for this chassis.
Dec  1 09:26:10 compute-0 ovn_controller[97794]: 2025-12-01T09:26:10Z|00046|binding|INFO|609b09f2-6c63-41e7-9850-15c0098f35b4: Claiming fa:16:3e:40:39:1e 192.168.0.18
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.056 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:39:1e 192.168.0.18'], port_security=['fa:16:3e:40:39:1e 192.168.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vdfkxa75cfa3-aohxquokylp7-2qxsn2rwux5j-port-smaxskxe3vm7', 'neutron:cidrs': '192.168.0.18/24', 'neutron:device_id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vdfkxa75cfa3-aohxquokylp7-2qxsn2rwux5j-port-smaxskxe3vm7', 'neutron:project_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5a5e6d4-6211-447f-b3f6-e2120ff69d87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=260b7b6c-4405-41e2-9dc8-1595161adaf8, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=609b09f2-6c63-41e7-9850-15c0098f35b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.057 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 609b09f2-6c63-41e7-9850-15c0098f35b4 in datapath 52d15875-2a2e-463a-bc5d-8fa6b8466bff bound to our chassis#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.059 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52d15875-2a2e-463a-bc5d-8fa6b8466bff#033[00m
Dec  1 09:26:10 compute-0 ovn_controller[97794]: 2025-12-01T09:26:10Z|00047|binding|INFO|Setting lport 609b09f2-6c63-41e7-9850-15c0098f35b4 ovn-installed in OVS
Dec  1 09:26:10 compute-0 ovn_controller[97794]: 2025-12-01T09:26:10Z|00048|binding|INFO|Setting lport 609b09f2-6c63-41e7-9850-15c0098f35b4 up in Southbound
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.074 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.093 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b0669942-f9c5-4baf-bc84-53c91401a7ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:26:10 compute-0 systemd-machined[155812]: New machine qemu-4-instance-00000004.
Dec  1 09:26:10 compute-0 systemd-udevd[243929]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:26:10 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.136 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[98fc24fc-421a-4009-a303-f5940c10b541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.139 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[786a59c3-54cf-4844-aa66-adf55501cd32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:26:10 compute-0 NetworkManager[56318]: <info>  [1764581170.1411] device (tap609b09f2-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:26:10 compute-0 NetworkManager[56318]: <info>  [1764581170.1460] device (tap609b09f2-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.175 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ae5e11-f766-433d-af3d-0d04b85f2dbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.196 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a061ce7d-cc7c-4b98-af3c-9e8344175ba8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52d15875-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:8c:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383928, 'reachable_time': 21789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243936, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.216 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b1906f6d-f11e-4a12-b8b4-05f0be4764a0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383943, 'tstamp': 383943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243942, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383945, 'tstamp': 383945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243942, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.218 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d15875-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.220 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.221 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.222 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d15875-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.222 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.223 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52d15875-20, col_values=(('external_ids', {'iface-id': 'dbcd2eb8-9722-4ebb-b254-d57f599617d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:26:10 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:10.223 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.831 189495 DEBUG nova.compute.manager [req-cbb2abaf-ce30-4ee4-9ba8-0391a0a6387f req-f2076108-f781-4b5d-8543-72054288472d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received event network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.832 189495 DEBUG oslo_concurrency.lockutils [req-cbb2abaf-ce30-4ee4-9ba8-0391a0a6387f req-f2076108-f781-4b5d-8543-72054288472d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.832 189495 DEBUG oslo_concurrency.lockutils [req-cbb2abaf-ce30-4ee4-9ba8-0391a0a6387f req-f2076108-f781-4b5d-8543-72054288472d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.832 189495 DEBUG oslo_concurrency.lockutils [req-cbb2abaf-ce30-4ee4-9ba8-0391a0a6387f req-f2076108-f781-4b5d-8543-72054288472d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.832 189495 DEBUG nova.compute.manager [req-cbb2abaf-ce30-4ee4-9ba8-0391a0a6387f req-f2076108-f781-4b5d-8543-72054288472d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Processing event network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.889 189495 DEBUG nova.network.neutron [req-1f8d8ec8-9ace-463a-8bcb-033f17770166 req-e8ae8565-0ae7-4f9f-bd58-e1566da41e35 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updated VIF entry in instance network info cache for port 609b09f2-6c63-41e7-9850-15c0098f35b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.889 189495 DEBUG nova.network.neutron [req-1f8d8ec8-9ace-463a-8bcb-033f17770166 req-e8ae8565-0ae7-4f9f-bd58-e1566da41e35 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updating instance_info_cache with network_info: [{"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:26:10 compute-0 nova_compute[189491]: 2025-12-01 09:26:10.904 189495 DEBUG oslo_concurrency.lockutils [req-1f8d8ec8-9ace-463a-8bcb-033f17770166 req-e8ae8565-0ae7-4f9f-bd58-e1566da41e35 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.116 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764581171.1151726, 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.117 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] VM Started (Lifecycle Event)#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.121 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.129 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.136 189495 INFO nova.virt.libvirt.driver [-] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Instance spawned successfully.#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.137 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.142 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.151 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.172 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.173 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.175 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.176 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.178 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.179 189495 DEBUG nova.virt.libvirt.driver [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.186 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.187 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764581171.1203902, 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.188 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.231 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.241 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764581171.125297, 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.242 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.252 189495 INFO nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Took 9.49 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.252 189495 DEBUG nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.268 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.275 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.308 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.333 189495 INFO nova.compute.manager [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Took 10.16 seconds to build instance.#033[00m
Dec  1 09:26:11 compute-0 nova_compute[189491]: 2025-12-01 09:26:11.351 189495 DEBUG oslo_concurrency.lockutils [None req-e15013f0-6e5f-4c82-bc5b-b40252a0adfd 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:12 compute-0 nova_compute[189491]: 2025-12-01 09:26:12.512 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:12 compute-0 nova_compute[189491]: 2025-12-01 09:26:12.935 189495 DEBUG nova.compute.manager [req-09f895b1-7044-4b72-80d9-12e9ef9d09f5 req-caffb686-d51b-40bc-88f3-9336ea15a3ad ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received event network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:26:12 compute-0 nova_compute[189491]: 2025-12-01 09:26:12.935 189495 DEBUG oslo_concurrency.lockutils [req-09f895b1-7044-4b72-80d9-12e9ef9d09f5 req-caffb686-d51b-40bc-88f3-9336ea15a3ad ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:12 compute-0 nova_compute[189491]: 2025-12-01 09:26:12.936 189495 DEBUG oslo_concurrency.lockutils [req-09f895b1-7044-4b72-80d9-12e9ef9d09f5 req-caffb686-d51b-40bc-88f3-9336ea15a3ad ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:12 compute-0 nova_compute[189491]: 2025-12-01 09:26:12.936 189495 DEBUG oslo_concurrency.lockutils [req-09f895b1-7044-4b72-80d9-12e9ef9d09f5 req-caffb686-d51b-40bc-88f3-9336ea15a3ad ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:12 compute-0 nova_compute[189491]: 2025-12-01 09:26:12.936 189495 DEBUG nova.compute.manager [req-09f895b1-7044-4b72-80d9-12e9ef9d09f5 req-caffb686-d51b-40bc-88f3-9336ea15a3ad ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] No waiting events found dispatching network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:26:12 compute-0 nova_compute[189491]: 2025-12-01 09:26:12.936 189495 WARNING nova.compute.manager [req-09f895b1-7044-4b72-80d9-12e9ef9d09f5 req-caffb686-d51b-40bc-88f3-9336ea15a3ad ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received unexpected event network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 for instance with vm_state active and task_state None.#033[00m
Dec  1 09:26:13 compute-0 nova_compute[189491]: 2025-12-01 09:26:13.939 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:17 compute-0 nova_compute[189491]: 2025-12-01 09:26:17.514 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:17 compute-0 podman[243955]: 2025-12-01 09:26:17.62432921 +0000 UTC m=+0.096243410 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:26:17 compute-0 podman[243956]: 2025-12-01 09:26:17.649506149 +0000 UTC m=+0.130907568 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:26:18 compute-0 nova_compute[189491]: 2025-12-01 09:26:18.944 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.782 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.783 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.797 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.801 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:26:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:19.803 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.754 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 01 Dec 2025 09:26:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e1d96bb3-3d7e-4b8a-b124-424e004934be x-openstack-request-id: req-e1d96bb3-3d7e-4b8a-b124-424e004934be _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.755 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86", "name": "vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge", "status": "ACTIVE", "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "user_id": "962a55152ff34fdda5eae1f8aee3a7a9", "metadata": {"metering.server_group": "1555a697-b0aa-4429-98e7-26e6671e228d"}, "hostId": "8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1", "image": {"id": "304c689d-2799-45ae-8166-517d5fd107b2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/304c689d-2799-45ae-8166-517d5fd107b2"}]}, "flavor": {"id": "719a52fe-7f4b-48c0-b9dc-6a91d4ec600c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/719a52fe-7f4b-48c0-b9dc-6a91d4ec600c"}]}, "created": "2025-12-01T09:25:59Z", "updated": "2025-12-01T09:26:11Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.18", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:40:39:1e"}, {"version": 4, "addr": "192.168.122.213", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:40:39:1e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T09:26:11.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.756 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 used request id req-e1d96bb3-3d7e-4b8a-b124-424e004934be request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.760 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'name': 'vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.766 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'name': 'vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.772 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '350d2bc4-8489-4a5a-991a-99e32671f962', 'name': 'vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.773 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.774 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.774 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:26:20.774645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.909 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.911 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:20.912 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.031 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.042 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.043 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.146 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.147 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.149 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.244 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.246 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.247 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.252 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.252 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.252 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.252 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:26:21.252728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.291 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.291 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.291 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.341 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.341 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.341 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.372 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.373 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.374 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.405 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.406 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.406 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.408 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.408 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.409 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.409 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.410 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.410 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 472883991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.411 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.411 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 1547208 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:26:21.408795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.412 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 469977634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.412 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 95101905 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.413 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 74341595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.414 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 451180044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.414 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 71893061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.415 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 57010170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.417 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.417 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.418 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.418 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.418 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:26:21.417931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.418 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.419 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.419 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.420 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.420 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.421 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.421 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.422 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.422 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.422 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.423 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.425 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.425 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.425 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:26:21.425549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.426 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.426 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.427 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.427 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.428 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.428 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.429 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.429 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.430 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 41783296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.430 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.431 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.432 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.433 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:26:21.432996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.479 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.519 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.570 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.625 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.627 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.628 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:26:21.627884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.628 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.628 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.629 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.629 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.630 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.630 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.631 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 1290221611 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.631 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 13179146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.632 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.632 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 1311172785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.634 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 7508073 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.635 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.636 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.637 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.637 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:26:21.637390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.638 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.638 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.639 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.639 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.640 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.640 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.641 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.641 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.642 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.642 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.644 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.646 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.646 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:26:21.647064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.653 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.660 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 / tap609b09f2-6c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.660 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.666 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.673 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.675 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.675 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.675 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.676 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T09:26:21.675711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.676 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge>]
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.677 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.677 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.677 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.679 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:26:21.677532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.679 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.679 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.680 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.680 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.681 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.681 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.682 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.683 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:26:21.680320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.684 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.685 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:26:21.684332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.686 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.686 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.686 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.687 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.687 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.687 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:26:21.687170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.688 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.689 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.690 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.691 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.691 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.692 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.692 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.694 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:26:21.690839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.694 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.695 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:26:21.694800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.695 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.696 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes volume: 7502 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.696 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.697 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.697 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.698 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.698 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.698 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.699 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.699 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.700 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes.delta volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:26:21.698458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T09:26:21.701251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge>]
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.702 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.702 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.702 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.82421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.702 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.702 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86: ceilometer.compute.pollsters.NoVolumeException
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.703 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.703 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:26:21.702390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.704 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.704 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.704 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.705 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.705 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes.delta volume: 1480 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.706 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.706 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.707 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.707 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.707 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.708 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.708 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.708 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 35550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.708 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/cpu volume: 9860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.709 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/cpu volume: 422510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.709 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/cpu volume: 31290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:26:21.704502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:26:21.706613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:26:21.708579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.710 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.711 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.711 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.711 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.712 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.712 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.712 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.712 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.713 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.713 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.713 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.714 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.714 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.715 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.715 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.715 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.716 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.717 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.717 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.717 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.718 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.718 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.718 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.718 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.719 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.719 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.719 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.720 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.720 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.720 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:26:21.710779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:26:21.715273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:26:21.717358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.722 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.722 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.722 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.722 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.723 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:26:21.722142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:26:21.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:26:22 compute-0 nova_compute[189491]: 2025-12-01 09:26:22.520 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:23 compute-0 nova_compute[189491]: 2025-12-01 09:26:23.947 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:24 compute-0 nova_compute[189491]: 2025-12-01 09:26:24.718 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:24 compute-0 nova_compute[189491]: 2025-12-01 09:26:24.719 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:26:24 compute-0 podman[244000]: 2025-12-01 09:26:24.728077127 +0000 UTC m=+0.095155083 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:26:25 compute-0 nova_compute[189491]: 2025-12-01 09:26:25.573 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:26:25 compute-0 nova_compute[189491]: 2025-12-01 09:26:25.574 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:26:25 compute-0 nova_compute[189491]: 2025-12-01 09:26:25.574 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:26:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:26.512 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:26.512 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:26:26.513 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:26 compute-0 nova_compute[189491]: 2025-12-01 09:26:26.975 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updating instance_info_cache with network_info: [{"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:26:26 compute-0 nova_compute[189491]: 2025-12-01 09:26:26.997 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:26:26 compute-0 nova_compute[189491]: 2025-12-01 09:26:26.998 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:26:27 compute-0 nova_compute[189491]: 2025-12-01 09:26:26.999 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:27 compute-0 nova_compute[189491]: 2025-12-01 09:26:27.524 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:27 compute-0 nova_compute[189491]: 2025-12-01 09:26:27.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.745 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.745 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.745 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.746 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:26:28 compute-0 podman[244020]: 2025-12-01 09:26:28.809698942 +0000 UTC m=+0.165817263 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:26:28 compute-0 podman[244021]: 2025-12-01 09:26:28.816336563 +0000 UTC m=+0.167535115 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, version=9.4, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, name=ubi9)
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.876 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.953 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.954 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:28 compute-0 nova_compute[189491]: 2025-12-01 09:26:28.972 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.018 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.020 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.120 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.123 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.223 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.235 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.332 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.334 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.400 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.403 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.472 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.475 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.537 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.549 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.613 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.615 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.681 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.684 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 podman[203700]: time="2025-12-01T09:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:26:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.750 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.753 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.819 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.832 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.902 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.905 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.970 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:29 compute-0 nova_compute[189491]: 2025-12-01 09:26:29.972 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.060 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.062 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.142 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.672 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.674 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4688MB free_disk=72.34193420410156GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.674 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.675 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.816 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.817 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.818 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.818 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.818 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.819 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.920 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.944 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.979 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:26:30 compute-0 nova_compute[189491]: 2025-12-01 09:26:30.980 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:26:31 compute-0 openstack_network_exporter[205866]: ERROR   09:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:26:31 compute-0 openstack_network_exporter[205866]: ERROR   09:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:26:31 compute-0 openstack_network_exporter[205866]: ERROR   09:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:26:31 compute-0 openstack_network_exporter[205866]: ERROR   09:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:26:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:26:31 compute-0 openstack_network_exporter[205866]: ERROR   09:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:26:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:26:32 compute-0 nova_compute[189491]: 2025-12-01 09:26:32.528 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:32 compute-0 nova_compute[189491]: 2025-12-01 09:26:32.982 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:32 compute-0 nova_compute[189491]: 2025-12-01 09:26:32.983 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:32 compute-0 nova_compute[189491]: 2025-12-01 09:26:32.983 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:32 compute-0 nova_compute[189491]: 2025-12-01 09:26:32.983 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:26:32 compute-0 nova_compute[189491]: 2025-12-01 09:26:32.983 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:26:33 compute-0 podman[244112]: 2025-12-01 09:26:33.764067554 +0000 UTC m=+0.133719576 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=)
Dec  1 09:26:33 compute-0 podman[244113]: 2025-12-01 09:26:33.765705544 +0000 UTC m=+0.127184438 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 09:26:33 compute-0 nova_compute[189491]: 2025-12-01 09:26:33.977 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:37 compute-0 nova_compute[189491]: 2025-12-01 09:26:37.533 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:37 compute-0 podman[244150]: 2025-12-01 09:26:37.766617096 +0000 UTC m=+0.131297778 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  1 09:26:37 compute-0 podman[244151]: 2025-12-01 09:26:37.831425874 +0000 UTC m=+0.197207632 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:26:38 compute-0 nova_compute[189491]: 2025-12-01 09:26:38.980 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:40 compute-0 ovn_controller[97794]: 2025-12-01T09:26:40Z|00049|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  1 09:26:42 compute-0 nova_compute[189491]: 2025-12-01 09:26:42.534 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:43 compute-0 nova_compute[189491]: 2025-12-01 09:26:43.985 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:45 compute-0 ovn_controller[97794]: 2025-12-01T09:26:45Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:39:1e 192.168.0.18
Dec  1 09:26:45 compute-0 ovn_controller[97794]: 2025-12-01T09:26:45Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:39:1e 192.168.0.18
Dec  1 09:26:47 compute-0 nova_compute[189491]: 2025-12-01 09:26:47.538 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:48 compute-0 podman[244209]: 2025-12-01 09:26:48.737412067 +0000 UTC m=+0.094253992 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:26:48 compute-0 podman[244210]: 2025-12-01 09:26:48.779124005 +0000 UTC m=+0.131022070 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:26:48 compute-0 nova_compute[189491]: 2025-12-01 09:26:48.991 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:52 compute-0 nova_compute[189491]: 2025-12-01 09:26:52.542 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:53 compute-0 nova_compute[189491]: 2025-12-01 09:26:53.995 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:55 compute-0 podman[244254]: 2025-12-01 09:26:55.761442047 +0000 UTC m=+0.124597031 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:26:57 compute-0 nova_compute[189491]: 2025-12-01 09:26:57.547 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:59 compute-0 nova_compute[189491]: 2025-12-01 09:26:58.999 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:26:59 compute-0 podman[203700]: time="2025-12-01T09:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:26:59 compute-0 podman[244272]: 2025-12-01 09:26:59.739612072 +0000 UTC m=+0.101701756 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:26:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:26:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 09:26:59 compute-0 podman[244273]: 2025-12-01 09:26:59.784575959 +0000 UTC m=+0.138953068 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, name=ubi9, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9)
Dec  1 09:27:01 compute-0 openstack_network_exporter[205866]: ERROR   09:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:27:01 compute-0 openstack_network_exporter[205866]: ERROR   09:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:27:01 compute-0 openstack_network_exporter[205866]: ERROR   09:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:27:01 compute-0 openstack_network_exporter[205866]: ERROR   09:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:27:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:27:01 compute-0 openstack_network_exporter[205866]: ERROR   09:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:27:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:27:02 compute-0 nova_compute[189491]: 2025-12-01 09:27:02.549 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:04 compute-0 nova_compute[189491]: 2025-12-01 09:27:04.004 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:04 compute-0 podman[244318]: 2025-12-01 09:27:04.774742247 +0000 UTC m=+0.125934362 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 09:27:04 compute-0 podman[244317]: 2025-12-01 09:27:04.786295566 +0000 UTC m=+0.143898367 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 09:27:07 compute-0 nova_compute[189491]: 2025-12-01 09:27:07.551 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:08 compute-0 podman[244356]: 2025-12-01 09:27:08.725754616 +0000 UTC m=+0.093140280 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  1 09:27:08 compute-0 podman[244357]: 2025-12-01 09:27:08.792229642 +0000 UTC m=+0.163406538 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  1 09:27:09 compute-0 nova_compute[189491]: 2025-12-01 09:27:09.007 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:12 compute-0 nova_compute[189491]: 2025-12-01 09:27:12.554 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:14 compute-0 nova_compute[189491]: 2025-12-01 09:27:14.010 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:17 compute-0 nova_compute[189491]: 2025-12-01 09:27:17.557 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:19 compute-0 nova_compute[189491]: 2025-12-01 09:27:19.013 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:19 compute-0 podman[244404]: 2025-12-01 09:27:19.733905354 +0000 UTC m=+0.091091991 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 09:27:19 compute-0 podman[244403]: 2025-12-01 09:27:19.75939519 +0000 UTC m=+0.119533729 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:27:22 compute-0 nova_compute[189491]: 2025-12-01 09:27:22.560 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:23 compute-0 nova_compute[189491]: 2025-12-01 09:27:23.718 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:23 compute-0 nova_compute[189491]: 2025-12-01 09:27:23.721 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:27:24 compute-0 nova_compute[189491]: 2025-12-01 09:27:24.018 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:24 compute-0 nova_compute[189491]: 2025-12-01 09:27:24.932 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:24 compute-0 nova_compute[189491]: 2025-12-01 09:27:24.933 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:27:24 compute-0 nova_compute[189491]: 2025-12-01 09:27:24.933 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:27:25 compute-0 nova_compute[189491]: 2025-12-01 09:27:25.610 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:27:25 compute-0 nova_compute[189491]: 2025-12-01 09:27:25.610 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:27:25 compute-0 nova_compute[189491]: 2025-12-01 09:27:25.611 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:27:25 compute-0 nova_compute[189491]: 2025-12-01 09:27:25.611 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:27:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:27:26.513 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:27:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:27:26.514 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:27:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:27:26.514 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:27:26 compute-0 podman[244449]: 2025-12-01 09:27:26.73419939 +0000 UTC m=+0.100622391 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.175 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.192 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.192 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.564 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:27:27 compute-0 nova_compute[189491]: 2025-12-01 09:27:27.738 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:27:29 compute-0 nova_compute[189491]: 2025-12-01 09:27:29.022 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:29 compute-0 nova_compute[189491]: 2025-12-01 09:27:29.733 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:29 compute-0 podman[203700]: time="2025-12-01T09:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:27:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:27:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.741 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.742 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:27:30 compute-0 podman[244467]: 2025-12-01 09:27:30.775897597 +0000 UTC m=+0.126757163 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:27:30 compute-0 podman[244466]: 2025-12-01 09:27:30.785878378 +0000 UTC m=+0.140978786 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.867 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.943 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:30 compute-0 nova_compute[189491]: 2025-12-01 09:27:30.944 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.002 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.004 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.093 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.094 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.152 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.163 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.223 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.225 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.282 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.283 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.379 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.381 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 openstack_network_exporter[205866]: ERROR   09:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:27:31 compute-0 openstack_network_exporter[205866]: ERROR   09:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:27:31 compute-0 openstack_network_exporter[205866]: ERROR   09:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:27:31 compute-0 openstack_network_exporter[205866]: ERROR   09:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:27:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:27:31 compute-0 openstack_network_exporter[205866]: ERROR   09:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:27:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.456 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.463 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.556 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.557 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.652 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.653 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.745 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.747 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.842 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.851 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.942 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:31 compute-0 nova_compute[189491]: 2025-12-01 09:27:31.944 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.051 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.053 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.153 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.155 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.229 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.566 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.801 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.802 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4591MB free_disk=72.3183708190918GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.803 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.803 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.909 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.909 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.909 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.910 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.911 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.911 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.957 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.988 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:27:32 compute-0 nova_compute[189491]: 2025-12-01 09:27:32.988 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:27:33 compute-0 nova_compute[189491]: 2025-12-01 09:27:33.012 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:27:33 compute-0 nova_compute[189491]: 2025-12-01 09:27:33.036 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:27:33 compute-0 nova_compute[189491]: 2025-12-01 09:27:33.147 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:27:33 compute-0 nova_compute[189491]: 2025-12-01 09:27:33.169 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:27:33 compute-0 nova_compute[189491]: 2025-12-01 09:27:33.174 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:27:33 compute-0 nova_compute[189491]: 2025-12-01 09:27:33.174 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:27:33 compute-0 nova_compute[189491]: 2025-12-01 09:27:33.175 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:34 compute-0 nova_compute[189491]: 2025-12-01 09:27:34.027 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:35 compute-0 nova_compute[189491]: 2025-12-01 09:27:35.184 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:35 compute-0 nova_compute[189491]: 2025-12-01 09:27:35.220 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:35 compute-0 nova_compute[189491]: 2025-12-01 09:27:35.222 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:27:35 compute-0 podman[244557]: 2025-12-01 09:27:35.757346734 +0000 UTC m=+0.110556181 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 09:27:35 compute-0 podman[244558]: 2025-12-01 09:27:35.757454257 +0000 UTC m=+0.098879750 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:27:37 compute-0 nova_compute[189491]: 2025-12-01 09:27:37.569 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:39 compute-0 nova_compute[189491]: 2025-12-01 09:27:39.031 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:39 compute-0 podman[244594]: 2025-12-01 09:27:39.78150509 +0000 UTC m=+0.136598321 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 09:27:39 compute-0 podman[244595]: 2025-12-01 09:27:39.893601567 +0000 UTC m=+0.242742053 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  1 09:27:42 compute-0 nova_compute[189491]: 2025-12-01 09:27:42.571 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:44 compute-0 nova_compute[189491]: 2025-12-01 09:27:44.034 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:47 compute-0 nova_compute[189491]: 2025-12-01 09:27:47.573 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:49 compute-0 nova_compute[189491]: 2025-12-01 09:27:49.037 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:50 compute-0 podman[244649]: 2025-12-01 09:27:50.729442287 +0000 UTC m=+0.095419575 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4)
Dec  1 09:27:50 compute-0 podman[244648]: 2025-12-01 09:27:50.733498106 +0000 UTC m=+0.092175898 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:27:52 compute-0 nova_compute[189491]: 2025-12-01 09:27:52.577 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:54 compute-0 nova_compute[189491]: 2025-12-01 09:27:54.042 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:57 compute-0 nova_compute[189491]: 2025-12-01 09:27:57.578 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:57 compute-0 podman[244690]: 2025-12-01 09:27:57.732551496 +0000 UTC m=+0.106281618 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:27:59 compute-0 nova_compute[189491]: 2025-12-01 09:27:59.047 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:27:59 compute-0 podman[203700]: time="2025-12-01T09:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:27:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:27:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec  1 09:28:01 compute-0 openstack_network_exporter[205866]: ERROR   09:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:28:01 compute-0 openstack_network_exporter[205866]: ERROR   09:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:28:01 compute-0 openstack_network_exporter[205866]: ERROR   09:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:28:01 compute-0 openstack_network_exporter[205866]: ERROR   09:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:28:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:28:01 compute-0 openstack_network_exporter[205866]: ERROR   09:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:28:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:28:01 compute-0 podman[244707]: 2025-12-01 09:28:01.747747862 +0000 UTC m=+0.098491470 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:28:01 compute-0 podman[244708]: 2025-12-01 09:28:01.755575961 +0000 UTC m=+0.105686294 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.111 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.149 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid 7ed22ffd-011d-48d7-962a-8606e471a59e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.150 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid 11a8e94c-61e3-4805-b688-e4b9121b30ba _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.151 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid 350d2bc4-8489-4a5a-991a-99e32671f962 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.152 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.153 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.154 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.155 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.155 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.156 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.157 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "350d2bc4-8489-4a5a-991a-99e32671f962" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.158 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.159 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.189 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.230 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "350d2bc4-8489-4a5a-991a-99e32671f962" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.231 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.242 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:28:02 compute-0 nova_compute[189491]: 2025-12-01 09:28:02.580 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:04 compute-0 nova_compute[189491]: 2025-12-01 09:28:04.052 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:06 compute-0 podman[244749]: 2025-12-01 09:28:06.73582312 +0000 UTC m=+0.105788146 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec  1 09:28:06 compute-0 podman[244750]: 2025-12-01 09:28:06.738418673 +0000 UTC m=+0.099081965 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:28:07 compute-0 nova_compute[189491]: 2025-12-01 09:28:07.583 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:09 compute-0 nova_compute[189491]: 2025-12-01 09:28:09.056 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:10 compute-0 podman[244785]: 2025-12-01 09:28:10.749298936 +0000 UTC m=+0.113532794 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:28:10 compute-0 podman[244786]: 2025-12-01 09:28:10.787765675 +0000 UTC m=+0.137979964 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:28:12 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 09:28:12 compute-0 nova_compute[189491]: 2025-12-01 09:28:12.587 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:14 compute-0 nova_compute[189491]: 2025-12-01 09:28:14.060 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:17 compute-0 nova_compute[189491]: 2025-12-01 09:28:17.590 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:19 compute-0 nova_compute[189491]: 2025-12-01 09:28:19.063 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.783 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.784 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.795 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.799 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'name': 'vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.802 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'name': 'vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.806 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '350d2bc4-8489-4a5a-991a-99e32671f962', 'name': 'vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.806 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:28:19.806952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.901 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.902 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.902 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.977 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.977 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:19.978 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.038 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.039 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.039 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.103 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.104 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.104 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.106 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.106 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:28:20.106397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.132 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.133 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.133 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.155 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.156 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.156 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.177 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.178 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.178 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.203 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.204 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.204 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.206 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.206 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:28:20.206615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.208 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.209 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.209 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 623315277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.209 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 99798863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.210 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 80231981 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.210 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 469977634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.211 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 95101905 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.211 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 74341595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.212 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 451180044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.212 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 71893061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.213 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 57010170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.214 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.214 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.214 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.215 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.215 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.215 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.215 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.216 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.217 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:28:20.215570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.218 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.219 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.220 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.221 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.222 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.223 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.224 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.225 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.226 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.229 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:28:20.230538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.231 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.232 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.232 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.233 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.235 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.235 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.236 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.237 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.237 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.237 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.238 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 41783296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.238 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.238 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.239 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.239 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.240 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.240 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.240 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.240 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:28:20.240381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.268 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.293 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.323 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.347 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.349 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.349 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.349 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.349 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.350 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:28:20.349490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.350 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 664336258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.350 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 9391906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.351 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.351 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 1291579094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.351 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 13179146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.352 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.352 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 1311172785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.352 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 7508073 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.353 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.354 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.354 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.354 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.354 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.354 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.355 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.355 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.355 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.355 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 243 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.356 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.356 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.356 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.357 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.357 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.358 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.358 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:28:20.354260) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:28:20.358698) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.362 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.365 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.368 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.372 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.372 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.373 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.373 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.374 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.374 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.375 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.375 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.375 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:28:20.373295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.378 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:28:20.374670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:28:20.376852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.378 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.378 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.379 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.379 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.379 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.380 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:28:20.378581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.380 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.381 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.381 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.381 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:28:20.380743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.383 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:28:20.382957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.383 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.383 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes volume: 7572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.384 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.384 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.385 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.385 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:28:20.385111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.385 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes.delta volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.385 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.386 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.386 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.387 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.387 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.82421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:28:20.387360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.387 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.388 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.388 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.389 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.389 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.389 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.389 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.389 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes.delta volume: 1396 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.390 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.390 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.391 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.391 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.391 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:28:20.389321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.391 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.391 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.392 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.392 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:28:20.391436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.392 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.393 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.393 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.393 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 37320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.393 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/cpu volume: 35260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.394 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/cpu volume: 424340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:28:20.393420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.394 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/cpu volume: 33090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.394 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.395 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.395 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.395 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.395 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.396 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.396 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.396 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.396 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.397 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.397 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.397 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.397 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.398 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.398 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.399 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:28:20.395445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.399 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.399 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.399 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.400 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.400 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.400 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.401 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:28:20.399779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.401 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.402 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:28:20.401885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.402 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.402 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.402 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.403 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.403 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.403 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.403 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.404 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.404 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.404 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.405 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.406 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.406 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.406 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.406 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.407 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:28:20.406141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:28:20.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:28:21 compute-0 podman[244836]: 2025-12-01 09:28:21.719602172 +0000 UTC m=+0.083414246 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:28:21 compute-0 podman[244837]: 2025-12-01 09:28:21.78579836 +0000 UTC m=+0.134921549 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 09:28:22 compute-0 nova_compute[189491]: 2025-12-01 09:28:22.594 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:24 compute-0 nova_compute[189491]: 2025-12-01 09:28:24.066 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:24 compute-0 nova_compute[189491]: 2025-12-01 09:28:24.762 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:24 compute-0 nova_compute[189491]: 2025-12-01 09:28:24.762 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:28:25 compute-0 nova_compute[189491]: 2025-12-01 09:28:25.659 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:28:25 compute-0 nova_compute[189491]: 2025-12-01 09:28:25.660 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:28:25 compute-0 nova_compute[189491]: 2025-12-01 09:28:25.660 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:28:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:28:26.514 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:28:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:28:26.516 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:28:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:28:26.517 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:28:27 compute-0 nova_compute[189491]: 2025-12-01 09:28:27.596 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:27 compute-0 nova_compute[189491]: 2025-12-01 09:28:27.824 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:28:28 compute-0 nova_compute[189491]: 2025-12-01 09:28:28.313 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:28:28 compute-0 nova_compute[189491]: 2025-12-01 09:28:28.314 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:28:28 compute-0 nova_compute[189491]: 2025-12-01 09:28:28.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:28 compute-0 podman[244880]: 2025-12-01 09:28:28.740503478 +0000 UTC m=+0.099965645 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  1 09:28:29 compute-0 nova_compute[189491]: 2025-12-01 09:28:29.068 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:29 compute-0 nova_compute[189491]: 2025-12-01 09:28:29.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:29 compute-0 podman[203700]: time="2025-12-01T09:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:28:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:28:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.748 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.748 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.749 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.749 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.890 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.983 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:30 compute-0 nova_compute[189491]: 2025-12-01 09:28:30.986 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.097 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.099 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.165 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.166 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.241 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.255 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.324 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.325 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.396 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.396 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 openstack_network_exporter[205866]: ERROR   09:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:28:31 compute-0 openstack_network_exporter[205866]: ERROR   09:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:28:31 compute-0 openstack_network_exporter[205866]: ERROR   09:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:28:31 compute-0 openstack_network_exporter[205866]: ERROR   09:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:28:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:28:31 compute-0 openstack_network_exporter[205866]: ERROR   09:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:28:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.479 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.481 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.581 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.593 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.664 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.666 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.733 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.735 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.836 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.838 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.918 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.926 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.994 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:31 compute-0 nova_compute[189491]: 2025-12-01 09:28:31.996 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.060 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.061 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.154 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.155 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.251 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.598 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.704 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.705 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=72.31846237182617GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.705 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.706 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:28:32 compute-0 podman[244952]: 2025-12-01 09:28:32.727160567 +0000 UTC m=+0.095220641 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:28:32 compute-0 podman[244953]: 2025-12-01 09:28:32.740742965 +0000 UTC m=+0.104189287 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible)
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.903 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.903 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.904 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.913 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.913 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:28:32 compute-0 nova_compute[189491]: 2025-12-01 09:28:32.913 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:28:33 compute-0 nova_compute[189491]: 2025-12-01 09:28:33.201 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:28:33 compute-0 nova_compute[189491]: 2025-12-01 09:28:33.215 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:28:33 compute-0 nova_compute[189491]: 2025-12-01 09:28:33.217 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:28:33 compute-0 nova_compute[189491]: 2025-12-01 09:28:33.217 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:28:34 compute-0 nova_compute[189491]: 2025-12-01 09:28:34.072 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:34 compute-0 nova_compute[189491]: 2025-12-01 09:28:34.211 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:34 compute-0 nova_compute[189491]: 2025-12-01 09:28:34.212 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:34 compute-0 nova_compute[189491]: 2025-12-01 09:28:34.212 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:34 compute-0 nova_compute[189491]: 2025-12-01 09:28:34.213 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:28:34 compute-0 nova_compute[189491]: 2025-12-01 09:28:34.213 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:28:37 compute-0 nova_compute[189491]: 2025-12-01 09:28:37.600 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:37 compute-0 podman[244994]: 2025-12-01 09:28:37.748936689 +0000 UTC m=+0.106650237 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:28:37 compute-0 podman[244993]: 2025-12-01 09:28:37.756872851 +0000 UTC m=+0.112407647 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Dec  1 09:28:39 compute-0 nova_compute[189491]: 2025-12-01 09:28:39.076 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:41 compute-0 podman[245030]: 2025-12-01 09:28:41.75717058 +0000 UTC m=+0.113578635 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:28:41 compute-0 podman[245031]: 2025-12-01 09:28:41.794297846 +0000 UTC m=+0.136687462 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:28:42 compute-0 nova_compute[189491]: 2025-12-01 09:28:42.603 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:44 compute-0 nova_compute[189491]: 2025-12-01 09:28:44.081 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:47 compute-0 nova_compute[189491]: 2025-12-01 09:28:47.606 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:49 compute-0 nova_compute[189491]: 2025-12-01 09:28:49.085 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:52 compute-0 nova_compute[189491]: 2025-12-01 09:28:52.609 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:52 compute-0 podman[245075]: 2025-12-01 09:28:52.732502663 +0000 UTC m=+0.092662570 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:28:52 compute-0 podman[245076]: 2025-12-01 09:28:52.77088592 +0000 UTC m=+0.123619947 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 09:28:54 compute-0 nova_compute[189491]: 2025-12-01 09:28:54.090 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:57 compute-0 nova_compute[189491]: 2025-12-01 09:28:57.612 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:59 compute-0 nova_compute[189491]: 2025-12-01 09:28:59.097 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:28:59 compute-0 podman[203700]: time="2025-12-01T09:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:28:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:28:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 09:28:59 compute-0 podman[245115]: 2025-12-01 09:28:59.779738652 +0000 UTC m=+0.130989835 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  1 09:29:01 compute-0 openstack_network_exporter[205866]: ERROR   09:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:29:01 compute-0 openstack_network_exporter[205866]: ERROR   09:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:29:01 compute-0 openstack_network_exporter[205866]: ERROR   09:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:29:01 compute-0 openstack_network_exporter[205866]: ERROR   09:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:29:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:29:01 compute-0 openstack_network_exporter[205866]: ERROR   09:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:29:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:29:02 compute-0 nova_compute[189491]: 2025-12-01 09:29:02.614 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:03 compute-0 podman[245133]: 2025-12-01 09:29:03.705704165 +0000 UTC m=+0.073765893 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:29:03 compute-0 podman[245134]: 2025-12-01 09:29:03.753599282 +0000 UTC m=+0.105486389 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543)
Dec  1 09:29:04 compute-0 nova_compute[189491]: 2025-12-01 09:29:04.100 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:07 compute-0 nova_compute[189491]: 2025-12-01 09:29:07.617 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:08 compute-0 podman[245173]: 2025-12-01 09:29:08.731691797 +0000 UTC m=+0.092284600 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 09:29:08 compute-0 podman[245174]: 2025-12-01 09:29:08.732585549 +0000 UTC m=+0.097635449 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 09:29:09 compute-0 nova_compute[189491]: 2025-12-01 09:29:09.106 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:12 compute-0 nova_compute[189491]: 2025-12-01 09:29:12.619 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:12 compute-0 podman[245212]: 2025-12-01 09:29:12.752595795 +0000 UTC m=+0.107727493 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd)
Dec  1 09:29:12 compute-0 podman[245213]: 2025-12-01 09:29:12.78882095 +0000 UTC m=+0.147665937 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:29:14 compute-0 nova_compute[189491]: 2025-12-01 09:29:14.110 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:17 compute-0 nova_compute[189491]: 2025-12-01 09:29:17.622 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:19 compute-0 nova_compute[189491]: 2025-12-01 09:29:19.114 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:22 compute-0 nova_compute[189491]: 2025-12-01 09:29:22.626 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:23 compute-0 podman[245261]: 2025-12-01 09:29:23.734193523 +0000 UTC m=+0.101312467 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:29:23 compute-0 podman[245262]: 2025-12-01 09:29:23.752518486 +0000 UTC m=+0.097339232 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 09:29:24 compute-0 nova_compute[189491]: 2025-12-01 09:29:24.118 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:29:26.515 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:29:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:29:26.517 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:29:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:29:26.518 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:29:26 compute-0 nova_compute[189491]: 2025-12-01 09:29:26.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:26 compute-0 nova_compute[189491]: 2025-12-01 09:29:26.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:29:27 compute-0 nova_compute[189491]: 2025-12-01 09:29:27.628 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:27 compute-0 nova_compute[189491]: 2025-12-01 09:29:27.702 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:29:27 compute-0 nova_compute[189491]: 2025-12-01 09:29:27.703 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:29:27 compute-0 nova_compute[189491]: 2025-12-01 09:29:27.704 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:29:29 compute-0 nova_compute[189491]: 2025-12-01 09:29:29.123 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:29 compute-0 podman[203700]: time="2025-12-01T09:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:29:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:29:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 09:29:29 compute-0 nova_compute[189491]: 2025-12-01 09:29:29.932 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updating instance_info_cache with network_info: [{"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:29:29 compute-0 nova_compute[189491]: 2025-12-01 09:29:29.958 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:29:29 compute-0 nova_compute[189491]: 2025-12-01 09:29:29.959 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:29:29 compute-0 nova_compute[189491]: 2025-12-01 09:29:29.959 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:30 compute-0 podman[245304]: 2025-12-01 09:29:30.719372259 +0000 UTC m=+0.094995306 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.744 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.745 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.745 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.848 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.910 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.911 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.966 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:30 compute-0 nova_compute[189491]: 2025-12-01 09:29:30.967 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.028 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.029 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.089 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.095 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.152 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.153 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.211 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.212 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.275 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.276 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.336 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.342 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.400 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.402 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 openstack_network_exporter[205866]: ERROR   09:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:29:31 compute-0 openstack_network_exporter[205866]: ERROR   09:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:29:31 compute-0 openstack_network_exporter[205866]: ERROR   09:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:29:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:29:31 compute-0 openstack_network_exporter[205866]: ERROR   09:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:29:31 compute-0 openstack_network_exporter[205866]: ERROR   09:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:29:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.501 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.502 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.562 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.569 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.634 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.641 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.715 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.717 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.773 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.774 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.835 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.838 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:29:31 compute-0 nova_compute[189491]: 2025-12-01 09:29:31.903 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.250 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.252 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4572MB free_disk=72.31846237182617GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.252 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.253 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.324 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.325 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.325 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.326 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.327 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.327 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.419 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.435 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.438 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.439 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:29:32 compute-0 nova_compute[189491]: 2025-12-01 09:29:32.630 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:33 compute-0 nova_compute[189491]: 2025-12-01 09:29:33.441 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:33 compute-0 nova_compute[189491]: 2025-12-01 09:29:33.441 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:33 compute-0 nova_compute[189491]: 2025-12-01 09:29:33.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:33 compute-0 nova_compute[189491]: 2025-12-01 09:29:33.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:33 compute-0 nova_compute[189491]: 2025-12-01 09:29:33.735 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:34 compute-0 nova_compute[189491]: 2025-12-01 09:29:34.128 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:34 compute-0 nova_compute[189491]: 2025-12-01 09:29:34.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:34 compute-0 nova_compute[189491]: 2025-12-01 09:29:34.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:29:34 compute-0 nova_compute[189491]: 2025-12-01 09:29:34.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:29:34 compute-0 podman[245372]: 2025-12-01 09:29:34.73881153 +0000 UTC m=+0.096095842 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc.)
Dec  1 09:29:34 compute-0 podman[245371]: 2025-12-01 09:29:34.742346316 +0000 UTC m=+0.106696009 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:29:37 compute-0 nova_compute[189491]: 2025-12-01 09:29:37.634 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:39 compute-0 nova_compute[189491]: 2025-12-01 09:29:39.132 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:39 compute-0 podman[245414]: 2025-12-01 09:29:39.728531227 +0000 UTC m=+0.091822559 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:29:39 compute-0 podman[245413]: 2025-12-01 09:29:39.738822636 +0000 UTC m=+0.093802307 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41)
Dec  1 09:29:42 compute-0 nova_compute[189491]: 2025-12-01 09:29:42.637 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:43 compute-0 podman[245453]: 2025-12-01 09:29:43.77367087 +0000 UTC m=+0.124539569 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 09:29:43 compute-0 podman[245454]: 2025-12-01 09:29:43.832207414 +0000 UTC m=+0.176809712 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:29:44 compute-0 nova_compute[189491]: 2025-12-01 09:29:44.134 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:47 compute-0 nova_compute[189491]: 2025-12-01 09:29:47.640 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:49 compute-0 nova_compute[189491]: 2025-12-01 09:29:49.137 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:52 compute-0 nova_compute[189491]: 2025-12-01 09:29:52.645 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:54 compute-0 nova_compute[189491]: 2025-12-01 09:29:54.141 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:54 compute-0 podman[245497]: 2025-12-01 09:29:54.725150382 +0000 UTC m=+0.088909689 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:29:54 compute-0 podman[245498]: 2025-12-01 09:29:54.746227771 +0000 UTC m=+0.100792225 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:29:57 compute-0 nova_compute[189491]: 2025-12-01 09:29:57.648 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:59 compute-0 nova_compute[189491]: 2025-12-01 09:29:59.145 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:29:59 compute-0 podman[203700]: time="2025-12-01T09:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:29:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:29:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 09:30:01 compute-0 openstack_network_exporter[205866]: ERROR   09:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:30:01 compute-0 openstack_network_exporter[205866]: ERROR   09:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:30:01 compute-0 openstack_network_exporter[205866]: ERROR   09:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:30:01 compute-0 openstack_network_exporter[205866]: ERROR   09:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:30:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:30:01 compute-0 openstack_network_exporter[205866]: ERROR   09:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:30:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:30:01 compute-0 podman[245538]: 2025-12-01 09:30:01.711448212 +0000 UTC m=+0.087852412 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec  1 09:30:02 compute-0 nova_compute[189491]: 2025-12-01 09:30:02.652 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:04 compute-0 nova_compute[189491]: 2025-12-01 09:30:04.149 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:05 compute-0 podman[245557]: 2025-12-01 09:30:05.745941076 +0000 UTC m=+0.115607804 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, version=9.4, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Dec  1 09:30:05 compute-0 podman[245556]: 2025-12-01 09:30:05.752172306 +0000 UTC m=+0.118746119 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:30:07 compute-0 nova_compute[189491]: 2025-12-01 09:30:07.654 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:09 compute-0 nova_compute[189491]: 2025-12-01 09:30:09.152 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:10 compute-0 podman[245597]: 2025-12-01 09:30:10.743886856 +0000 UTC m=+0.111957156 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64)
Dec  1 09:30:10 compute-0 podman[245598]: 2025-12-01 09:30:10.758918029 +0000 UTC m=+0.128226787 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 09:30:12 compute-0 nova_compute[189491]: 2025-12-01 09:30:12.656 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:14 compute-0 nova_compute[189491]: 2025-12-01 09:30:14.155 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:14 compute-0 podman[245635]: 2025-12-01 09:30:14.748327492 +0000 UTC m=+0.109760982 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:30:14 compute-0 podman[245636]: 2025-12-01 09:30:14.791680779 +0000 UTC m=+0.156023369 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 09:30:17 compute-0 nova_compute[189491]: 2025-12-01 09:30:17.658 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:19 compute-0 nova_compute[189491]: 2025-12-01 09:30:19.157 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.784 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.785 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff8501475f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.798 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.802 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'name': 'vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.806 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'name': 'vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.809 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '350d2bc4-8489-4a5a-991a-99e32671f962', 'name': 'vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.810 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.810 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.810 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.810 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:30:19.810564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.885 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.886 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.886 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.968 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.969 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:19.969 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.060 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.062 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.063 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.144 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.145 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.145 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.146 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.146 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:30:20.146441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.169 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.169 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.169 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.195 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.196 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.196 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.222 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.222 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.223 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.254 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.254 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.255 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.256 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.257 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.258 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.258 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.258 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.259 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 623315277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.259 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 99798863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.259 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 80231981 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.260 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 469977634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.260 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 95101905 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.261 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.latency volume: 74341595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.261 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 451180044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.261 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 71893061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.262 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 57010170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:30:20.257540) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.263 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.264 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:30:20.263866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.264 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.264 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.265 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.265 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.265 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.266 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.266 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.266 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.267 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.267 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.267 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.268 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.268 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.269 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.269 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:30:20.269045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.269 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.270 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.270 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.270 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.270 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.271 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.271 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.271 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.271 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 41783296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.272 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.272 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.273 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.273 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.273 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:30:20.273764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.293 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.313 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.335 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.355 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.356 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.357 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.357 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:30:20.357304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.358 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.358 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 664336258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.358 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 9391906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.359 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.359 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 1291579094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.359 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 13179146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.359 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.360 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 1311172785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.360 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 7508073 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.360 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.361 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:30:20.361799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.362 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.362 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.362 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.362 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.363 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.363 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.363 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 243 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.364 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.364 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.364 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.364 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.365 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.366 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.366 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:30:20.366314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.369 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.374 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.377 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.381 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.382 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.383 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.384 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:30:20.382521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.384 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:30:20.383767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.384 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.384 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.385 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.385 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.386 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:30:20.385490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.387 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:30:20.386749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.387 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.387 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.387 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.388 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.388 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.388 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.388 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.389 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.389 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:30:20.388393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.390 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.390 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.390 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.390 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.390 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.390 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.390 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes volume: 7572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.391 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.391 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.392 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.392 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:30:20.390283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.392 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:30:20.391968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.392 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.393 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.393 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.394 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.394 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.394 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.82421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:30:20.394304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.394 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.395 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.395 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.395 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.395 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.396 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.396 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.396 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.396 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.396 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.397 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:30:20.396117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.397 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.398 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.398 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.398 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.398 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.398 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.399 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.399 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.399 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:30:20.398231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.400 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 39070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.400 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/cpu volume: 37050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.400 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/cpu volume: 426120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.400 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/cpu volume: 34900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.401 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.401 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:30:20.400058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.402 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:30:20.401854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.402 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.402 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.402 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.403 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.403 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.403 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.404 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.404 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.404 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.404 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.405 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.406 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.406 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:30:20.406223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.406 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.407 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.407 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.408 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.408 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:30:20.408482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.409 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.409 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.409 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.409 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.410 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.410 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.410 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.410 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.410 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.410 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.411 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.411 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.412 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.412 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:30:20.412250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.412 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.412 14 DEBUG ceilometer.compute.pollsters [-] 11a8e94c-61e3-4805-b688-e4b9121b30ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.413 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:30:20.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:30:22 compute-0 nova_compute[189491]: 2025-12-01 09:30:22.663 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:24 compute-0 nova_compute[189491]: 2025-12-01 09:30:24.161 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:25 compute-0 podman[245681]: 2025-12-01 09:30:25.721572644 +0000 UTC m=+0.085499336 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:30:25 compute-0 podman[245682]: 2025-12-01 09:30:25.730694374 +0000 UTC m=+0.095900797 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 09:30:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:30:26.517 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:30:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:30:26.517 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:30:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:30:26.518 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:30:26 compute-0 nova_compute[189491]: 2025-12-01 09:30:26.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:30:26 compute-0 nova_compute[189491]: 2025-12-01 09:30:26.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:30:27 compute-0 nova_compute[189491]: 2025-12-01 09:30:27.665 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:27 compute-0 nova_compute[189491]: 2025-12-01 09:30:27.707 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:30:27 compute-0 nova_compute[189491]: 2025-12-01 09:30:27.708 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:30:27 compute-0 nova_compute[189491]: 2025-12-01 09:30:27.708 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:30:29 compute-0 nova_compute[189491]: 2025-12-01 09:30:29.166 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:29 compute-0 podman[203700]: time="2025-12-01T09:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:30:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:30:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 09:30:30 compute-0 nova_compute[189491]: 2025-12-01 09:30:30.886 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updating instance_info_cache with network_info: [{"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:30:30 compute-0 nova_compute[189491]: 2025-12-01 09:30:30.913 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:30:30 compute-0 nova_compute[189491]: 2025-12-01 09:30:30.914 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:30:31 compute-0 openstack_network_exporter[205866]: ERROR   09:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:30:31 compute-0 openstack_network_exporter[205866]: ERROR   09:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:30:31 compute-0 openstack_network_exporter[205866]: ERROR   09:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:30:31 compute-0 openstack_network_exporter[205866]: ERROR   09:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:30:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:30:31 compute-0 openstack_network_exporter[205866]: ERROR   09:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:30:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:30:31 compute-0 nova_compute[189491]: 2025-12-01 09:30:31.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.670 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.737 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.737 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.738 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.738 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:30:32 compute-0 podman[245725]: 2025-12-01 09:30:32.740815919 +0000 UTC m=+0.102405374 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.843 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.909 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:32 compute-0 nova_compute[189491]: 2025-12-01 09:30:32.910 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.002 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.004 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.065 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.066 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.157 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.164 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.243 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.245 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.306 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.309 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.386 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.388 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.453 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.462 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.530 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.531 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.603 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.605 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.684 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.686 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.745 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.755 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.843 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.845 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.917 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:33 compute-0 nova_compute[189491]: 2025-12-01 09:30:33.920 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.019 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.022 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.109 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.169 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.573 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.574 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4573MB free_disk=72.31806945800781GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.574 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.574 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.651 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.652 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.652 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.653 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.653 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.653 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.737 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.750 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.751 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 09:30:34 compute-0 nova_compute[189491]: 2025-12-01 09:30:34.752 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:30:35 compute-0 nova_compute[189491]: 2025-12-01 09:30:35.753 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:30:35 compute-0 nova_compute[189491]: 2025-12-01 09:30:35.754 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:30:35 compute-0 nova_compute[189491]: 2025-12-01 09:30:35.754 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:30:35 compute-0 nova_compute[189491]: 2025-12-01 09:30:35.755 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:30:36 compute-0 podman[245794]: 2025-12-01 09:30:36.707085658 +0000 UTC m=+0.076683203 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:30:36 compute-0 nova_compute[189491]: 2025-12-01 09:30:36.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:30:36 compute-0 nova_compute[189491]: 2025-12-01 09:30:36.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:30:36 compute-0 nova_compute[189491]: 2025-12-01 09:30:36.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 09:30:36 compute-0 podman[245795]: 2025-12-01 09:30:36.73701331 +0000 UTC m=+0.099245858 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git)
Dec  1 09:30:37 compute-0 nova_compute[189491]: 2025-12-01 09:30:37.670 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:39 compute-0 nova_compute[189491]: 2025-12-01 09:30:39.177 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:41 compute-0 podman[245836]: 2025-12-01 09:30:41.741123255 +0000 UTC m=+0.101735168 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 09:30:41 compute-0 podman[245835]: 2025-12-01 09:30:41.741086925 +0000 UTC m=+0.094785941 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 09:30:42 compute-0 nova_compute[189491]: 2025-12-01 09:30:42.673 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:44 compute-0 nova_compute[189491]: 2025-12-01 09:30:44.179 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:45 compute-0 podman[245869]: 2025-12-01 09:30:45.799312064 +0000 UTC m=+0.154914572 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 09:30:45 compute-0 podman[245870]: 2025-12-01 09:30:45.816079639 +0000 UTC m=+0.157633608 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:30:47 compute-0 nova_compute[189491]: 2025-12-01 09:30:47.677 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:49 compute-0 nova_compute[189491]: 2025-12-01 09:30:49.183 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:52 compute-0 nova_compute[189491]: 2025-12-01 09:30:52.679 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:54 compute-0 nova_compute[189491]: 2025-12-01 09:30:54.185 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:56 compute-0 podman[245912]: 2025-12-01 09:30:56.70240319 +0000 UTC m=+0.061690471 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:30:56 compute-0 podman[245913]: 2025-12-01 09:30:56.712853592 +0000 UTC m=+0.070606926 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  1 09:30:57 compute-0 nova_compute[189491]: 2025-12-01 09:30:57.682 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:59 compute-0 nova_compute[189491]: 2025-12-01 09:30:59.188 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:30:59 compute-0 podman[203700]: time="2025-12-01T09:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:30:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:30:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec  1 09:31:01 compute-0 openstack_network_exporter[205866]: ERROR   09:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:31:01 compute-0 openstack_network_exporter[205866]: ERROR   09:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:31:01 compute-0 openstack_network_exporter[205866]: ERROR   09:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:31:01 compute-0 openstack_network_exporter[205866]: ERROR   09:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:31:01 compute-0 openstack_network_exporter[205866]: ERROR   09:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:31:02 compute-0 nova_compute[189491]: 2025-12-01 09:31:02.686 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:31:03 compute-0 podman[245953]: 2025-12-01 09:31:03.748238895 +0000 UTC m=+0.110987391 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 09:31:04 compute-0 nova_compute[189491]: 2025-12-01 09:31:04.192 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:31:07 compute-0 nova_compute[189491]: 2025-12-01 09:31:07.687 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:31:07 compute-0 podman[245972]: 2025-12-01 09:31:07.738443909 +0000 UTC m=+0.100516659 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:31:07 compute-0 podman[245973]: 2025-12-01 09:31:07.744197898 +0000 UTC m=+0.100868337 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release=1214.1726694543, build-date=2024-09-18T21:23:30, version=9.4, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Dec  1 09:31:09 compute-0 nova_compute[189491]: 2025-12-01 09:31:09.197 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:12 compute-0 nova_compute[189491]: 2025-12-01 09:31:12.690 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:12 compute-0 podman[246015]: 2025-12-01 09:31:12.69846457 +0000 UTC m=+0.060319942 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 09:31:12 compute-0 podman[246014]: 2025-12-01 09:31:12.734570315 +0000 UTC m=+0.100502076 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:31:14 compute-0 nova_compute[189491]: 2025-12-01 09:31:14.199 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:16 compute-0 podman[246051]: 2025-12-01 09:31:16.694242188 +0000 UTC m=+0.070133330 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 09:31:16 compute-0 podman[246052]: 2025-12-01 09:31:16.719439149 +0000 UTC m=+0.095590347 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 09:31:17 compute-0 nova_compute[189491]: 2025-12-01 09:31:17.693 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:19 compute-0 nova_compute[189491]: 2025-12-01 09:31:19.202 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:22 compute-0 nova_compute[189491]: 2025-12-01 09:31:22.695 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:24 compute-0 nova_compute[189491]: 2025-12-01 09:31:24.205 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:26.517 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:31:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:26.518 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:31:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:26.518 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:31:27 compute-0 nova_compute[189491]: 2025-12-01 09:31:27.698 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:27 compute-0 nova_compute[189491]: 2025-12-01 09:31:27.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:27 compute-0 nova_compute[189491]: 2025-12-01 09:31:27.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:31:27 compute-0 nova_compute[189491]: 2025-12-01 09:31:27.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:31:27 compute-0 podman[246099]: 2025-12-01 09:31:27.758145093 +0000 UTC m=+0.109499403 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 09:31:27 compute-0 podman[246098]: 2025-12-01 09:31:27.772295666 +0000 UTC m=+0.119243000 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:31:28 compute-0 nova_compute[189491]: 2025-12-01 09:31:28.725 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:31:28 compute-0 nova_compute[189491]: 2025-12-01 09:31:28.726 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:31:28 compute-0 nova_compute[189491]: 2025-12-01 09:31:28.726 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:31:28 compute-0 nova_compute[189491]: 2025-12-01 09:31:28.727 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:31:29 compute-0 nova_compute[189491]: 2025-12-01 09:31:29.210 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:29 compute-0 podman[203700]: time="2025-12-01T09:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:31:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:31:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec  1 09:31:30 compute-0 nova_compute[189491]: 2025-12-01 09:31:30.737 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:31:30 compute-0 nova_compute[189491]: 2025-12-01 09:31:30.757 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:31:30 compute-0 nova_compute[189491]: 2025-12-01 09:31:30.757 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:31:31 compute-0 openstack_network_exporter[205866]: ERROR   09:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:31:31 compute-0 openstack_network_exporter[205866]: ERROR   09:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:31:31 compute-0 openstack_network_exporter[205866]: ERROR   09:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:31:31 compute-0 openstack_network_exporter[205866]: ERROR   09:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:31:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:31:31 compute-0 openstack_network_exporter[205866]: ERROR   09:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:31:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:31:32 compute-0 nova_compute[189491]: 2025-12-01 09:31:32.700 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:33 compute-0 nova_compute[189491]: 2025-12-01 09:31:33.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:33 compute-0 nova_compute[189491]: 2025-12-01 09:31:33.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:33 compute-0 nova_compute[189491]: 2025-12-01 09:31:33.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:33 compute-0 nova_compute[189491]: 2025-12-01 09:31:33.954 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:31:33 compute-0 nova_compute[189491]: 2025-12-01 09:31:33.955 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:31:33 compute-0 nova_compute[189491]: 2025-12-01 09:31:33.956 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:31:33 compute-0 nova_compute[189491]: 2025-12-01 09:31:33.956 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.194 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.212 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.267 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.268 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.335 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.337 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.413 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.414 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.482 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.491 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.550 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.551 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.615 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.616 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.675 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.677 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 podman[246156]: 2025-12-01 09:31:34.68760468 +0000 UTC m=+0.066713727 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.745 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.753 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.829 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.830 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.889 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.890 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.948 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:34 compute-0 nova_compute[189491]: 2025-12-01 09:31:34.949 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.012 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.020 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.097 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.100 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.160 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.162 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.240 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.241 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.299 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.679 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.680 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4565MB free_disk=72.31821060180664GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.680 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:31:35 compute-0 nova_compute[189491]: 2025-12-01 09:31:35.681 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.107 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.108 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 11a8e94c-61e3-4805-b688-e4b9121b30ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.108 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.109 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.109 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.110 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.231 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.279 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.281 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:31:36 compute-0 nova_compute[189491]: 2025-12-01 09:31:36.282 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:31:37 compute-0 nova_compute[189491]: 2025-12-01 09:31:37.278 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:37 compute-0 nova_compute[189491]: 2025-12-01 09:31:37.278 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:37 compute-0 nova_compute[189491]: 2025-12-01 09:31:37.302 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:37 compute-0 nova_compute[189491]: 2025-12-01 09:31:37.302 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:37 compute-0 nova_compute[189491]: 2025-12-01 09:31:37.302 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:37 compute-0 nova_compute[189491]: 2025-12-01 09:31:37.302 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:31:37 compute-0 nova_compute[189491]: 2025-12-01 09:31:37.703 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:38 compute-0 podman[246208]: 2025-12-01 09:31:38.701535256 +0000 UTC m=+0.074688110 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:31:38 compute-0 podman[246209]: 2025-12-01 09:31:38.703564274 +0000 UTC m=+0.077017677 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, name=ubi9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 
'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 09:31:38 compute-0 nova_compute[189491]: 2025-12-01 09:31:38.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:31:39 compute-0 nova_compute[189491]: 2025-12-01 09:31:39.216 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:42 compute-0 nova_compute[189491]: 2025-12-01 09:31:42.705 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:43 compute-0 podman[246251]: 2025-12-01 09:31:43.711467439 +0000 UTC m=+0.067426674 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  1 09:31:43 compute-0 podman[246250]: 2025-12-01 09:31:43.744214003 +0000 UTC m=+0.097722478 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 09:31:44 compute-0 nova_compute[189491]: 2025-12-01 09:31:44.219 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:47 compute-0 nova_compute[189491]: 2025-12-01 09:31:47.707 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:47 compute-0 podman[246288]: 2025-12-01 09:31:47.747516745 +0000 UTC m=+0.099916722 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 09:31:47 compute-0 podman[246289]: 2025-12-01 09:31:47.794069543 +0000 UTC m=+0.136554750 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 09:31:49 compute-0 nova_compute[189491]: 2025-12-01 09:31:49.222 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.292 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.292 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.293 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.293 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.293 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.295 189495 INFO nova.compute.manager [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Terminating instance#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.296 189495 DEBUG nova.compute.manager [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:31:51 compute-0 kernel: tap213d57d5-9e (unregistering): left promiscuous mode
Dec  1 09:31:51 compute-0 NetworkManager[56318]: <info>  [1764581511.3412] device (tap213d57d5-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.363 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 ovn_controller[97794]: 2025-12-01T09:31:51Z|00050|binding|INFO|Releasing lport 213d57d5-9e28-4606-937a-97375a401f82 from this chassis (sb_readonly=0)
Dec  1 09:31:51 compute-0 ovn_controller[97794]: 2025-12-01T09:31:51Z|00051|binding|INFO|Setting lport 213d57d5-9e28-4606-937a-97375a401f82 down in Southbound
Dec  1 09:31:51 compute-0 ovn_controller[97794]: 2025-12-01T09:31:51Z|00052|binding|INFO|Removing iface tap213d57d5-9e ovn-installed in OVS
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.381 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:b9:7c 192.168.0.178'], port_security=['fa:16:3e:03:b9:7c 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vdfkxa75cfa3-6buvcyjxf2ua-hietjgfclklq-port-cj54npjlvy2j', 'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': '11a8e94c-61e3-4805-b688-e4b9121b30ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vdfkxa75cfa3-6buvcyjxf2ua-hietjgfclklq-port-cj54npjlvy2j', 'neutron:project_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5a5e6d4-6211-447f-b3f6-e2120ff69d87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=260b7b6c-4405-41e2-9dc8-1595161adaf8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=213d57d5-9e28-4606-937a-97375a401f82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.382 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 213d57d5-9e28-4606-937a-97375a401f82 in datapath 52d15875-2a2e-463a-bc5d-8fa6b8466bff unbound from our chassis#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.383 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52d15875-2a2e-463a-bc5d-8fa6b8466bff#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.388 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.403 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[98904d77-a46e-40e9-80de-b02dab508842]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:31:51 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 8min 48.785s CPU time.
Dec  1 09:31:51 compute-0 systemd-machined[155812]: Machine qemu-2-instance-00000002 terminated.
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.437 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[71878ca4-04b0-44bf-bc44-b376c0691d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.440 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[19efd5ac-49cc-4164-b73d-b523284ff353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.464 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[0989b647-7e31-40a1-9006-6756c5a95ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.482 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4b6af4-720c-458b-baf1-ace246060450]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52d15875-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:8c:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383928, 'reachable_time': 38841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246345, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.499 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa1109e-9e89-4027-99f3-1bfcf35d4f59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383943, 'tstamp': 383943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246346, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383945, 'tstamp': 383945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246346, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.501 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d15875-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.504 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.511 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.512 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d15875-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.512 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.513 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52d15875-20, col_values=(('external_ids', {'iface-id': 'dbcd2eb8-9722-4ebb-b254-d57f599617d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:31:51 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:51.514 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.521 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.528 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.580 189495 INFO nova.virt.libvirt.driver [-] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Instance destroyed successfully.#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.580 189495 DEBUG nova.objects.instance [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'resources' on Instance uuid 11a8e94c-61e3-4805-b688-e4b9121b30ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.596 189495 DEBUG nova.virt.libvirt.vif [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-6buvcyjxf2ua-hietjgfclklq-vnf-3mwygpaab4vh',id=2,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:17:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-7mhbbi8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:17:55Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTM2MjY3NzI1NDc3MTE4NDk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  1 09:31:51 compute-0 nova_compute[189491]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTM2M
jY3NzI1NDc3MTE4NDk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUzNjI2NzcyNTQ3NzExODQ5NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MzYyNjc3MjU0NzcxMTg0OTcyPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=11a8e94c-61e3-4805-b688-e4b9121b30ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.597 189495 DEBUG nova.network.os_vif_util [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.598 189495 DEBUG nova.network.os_vif_util [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:b9:7c,bridge_name='br-int',has_traffic_filtering=True,id=213d57d5-9e28-4606-937a-97375a401f82,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap213d57d5-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.598 189495 DEBUG os_vif [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:b9:7c,bridge_name='br-int',has_traffic_filtering=True,id=213d57d5-9e28-4606-937a-97375a401f82,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap213d57d5-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.600 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.600 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap213d57d5-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.602 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.604 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.606 189495 INFO os_vif [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:b9:7c,bridge_name='br-int',has_traffic_filtering=True,id=213d57d5-9e28-4606-937a-97375a401f82,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap213d57d5-9e')#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.607 189495 INFO nova.virt.libvirt.driver [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Deleting instance files /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba_del#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.608 189495 INFO nova.virt.libvirt.driver [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Deletion of /var/lib/nova/instances/11a8e94c-61e3-4805-b688-e4b9121b30ba_del complete#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.685 189495 DEBUG nova.virt.libvirt.host [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.685 189495 INFO nova.virt.libvirt.host [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] UEFI support detected#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.688 189495 INFO nova.compute.manager [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.689 189495 DEBUG oslo.service.loopingcall [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.689 189495 DEBUG nova.compute.manager [-] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:31:51 compute-0 nova_compute[189491]: 2025-12-01 09:31:51.689 189495 DEBUG nova.network.neutron [-] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:31:51 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:31:51.596 189495 DEBUG nova.virt.libvirt.vif [None req-d2be117b-df [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.037 189495 DEBUG nova.compute.manager [req-a1d25b00-ac34-47c1-805f-1a9e0474b07f req-1cfc2966-917c-47f4-a7a0-4441b0980301 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received event network-vif-unplugged-213d57d5-9e28-4606-937a-97375a401f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.037 189495 DEBUG oslo_concurrency.lockutils [req-a1d25b00-ac34-47c1-805f-1a9e0474b07f req-1cfc2966-917c-47f4-a7a0-4441b0980301 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.037 189495 DEBUG oslo_concurrency.lockutils [req-a1d25b00-ac34-47c1-805f-1a9e0474b07f req-1cfc2966-917c-47f4-a7a0-4441b0980301 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.038 189495 DEBUG oslo_concurrency.lockutils [req-a1d25b00-ac34-47c1-805f-1a9e0474b07f req-1cfc2966-917c-47f4-a7a0-4441b0980301 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.038 189495 DEBUG nova.compute.manager [req-a1d25b00-ac34-47c1-805f-1a9e0474b07f req-1cfc2966-917c-47f4-a7a0-4441b0980301 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] No waiting events found dispatching network-vif-unplugged-213d57d5-9e28-4606-937a-97375a401f82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.038 189495 DEBUG nova.compute.manager [req-a1d25b00-ac34-47c1-805f-1a9e0474b07f req-1cfc2966-917c-47f4-a7a0-4441b0980301 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received event network-vif-unplugged-213d57d5-9e28-4606-937a-97375a401f82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:31:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:52.086 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:31:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:52.088 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.087 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.710 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.931 189495 DEBUG nova.compute.manager [req-21d6dddb-d922-4d25-888e-9f930bdd305a req-518944f5-be6e-4dc3-9d99-c75dbda50ea8 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received event network-changed-213d57d5-9e28-4606-937a-97375a401f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.932 189495 DEBUG nova.compute.manager [req-21d6dddb-d922-4d25-888e-9f930bdd305a req-518944f5-be6e-4dc3-9d99-c75dbda50ea8 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Refreshing instance network info cache due to event network-changed-213d57d5-9e28-4606-937a-97375a401f82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.932 189495 DEBUG oslo_concurrency.lockutils [req-21d6dddb-d922-4d25-888e-9f930bdd305a req-518944f5-be6e-4dc3-9d99-c75dbda50ea8 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.932 189495 DEBUG oslo_concurrency.lockutils [req-21d6dddb-d922-4d25-888e-9f930bdd305a req-518944f5-be6e-4dc3-9d99-c75dbda50ea8 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:31:52 compute-0 nova_compute[189491]: 2025-12-01 09:31:52.932 189495 DEBUG nova.network.neutron [req-21d6dddb-d922-4d25-888e-9f930bdd305a req-518944f5-be6e-4dc3-9d99-c75dbda50ea8 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Refreshing network info cache for port 213d57d5-9e28-4606-937a-97375a401f82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.350 189495 DEBUG nova.network.neutron [-] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.373 189495 INFO nova.compute.manager [-] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Took 1.68 seconds to deallocate network for instance.#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.420 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.421 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.552 189495 DEBUG nova.compute.provider_tree [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.570 189495 DEBUG nova.scheduler.client.report [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.593 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.633 189495 INFO nova.scheduler.client.report [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Deleted allocations for instance 11a8e94c-61e3-4805-b688-e4b9121b30ba#033[00m
Dec  1 09:31:53 compute-0 nova_compute[189491]: 2025-12-01 09:31:53.724 189495 DEBUG oslo_concurrency.lockutils [None req-d2be117b-df19-4226-ae02-2a483045bb6b 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:31:54 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:31:54.090 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.227 189495 DEBUG nova.network.neutron [req-21d6dddb-d922-4d25-888e-9f930bdd305a req-518944f5-be6e-4dc3-9d99-c75dbda50ea8 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updated VIF entry in instance network info cache for port 213d57d5-9e28-4606-937a-97375a401f82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.228 189495 DEBUG nova.network.neutron [req-21d6dddb-d922-4d25-888e-9f930bdd305a req-518944f5-be6e-4dc3-9d99-c75dbda50ea8 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Updating instance_info_cache with network_info: [{"id": "213d57d5-9e28-4606-937a-97375a401f82", "address": "fa:16:3e:03:b9:7c", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap213d57d5-9e", "ovs_interfaceid": "213d57d5-9e28-4606-937a-97375a401f82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.323 189495 DEBUG nova.compute.manager [req-fc1c6f20-2e1a-4ae5-84fa-f8cb41943066 req-99521e6b-fd2d-44b0-bf83-5c356e01342c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received event network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.323 189495 DEBUG oslo_concurrency.lockutils [req-fc1c6f20-2e1a-4ae5-84fa-f8cb41943066 req-99521e6b-fd2d-44b0-bf83-5c356e01342c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.323 189495 DEBUG oslo_concurrency.lockutils [req-fc1c6f20-2e1a-4ae5-84fa-f8cb41943066 req-99521e6b-fd2d-44b0-bf83-5c356e01342c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.324 189495 DEBUG oslo_concurrency.lockutils [req-fc1c6f20-2e1a-4ae5-84fa-f8cb41943066 req-99521e6b-fd2d-44b0-bf83-5c356e01342c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "11a8e94c-61e3-4805-b688-e4b9121b30ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.324 189495 DEBUG nova.compute.manager [req-fc1c6f20-2e1a-4ae5-84fa-f8cb41943066 req-99521e6b-fd2d-44b0-bf83-5c356e01342c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] No waiting events found dispatching network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.324 189495 WARNING nova.compute.manager [req-fc1c6f20-2e1a-4ae5-84fa-f8cb41943066 req-99521e6b-fd2d-44b0-bf83-5c356e01342c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Received unexpected event network-vif-plugged-213d57d5-9e28-4606-937a-97375a401f82 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:31:54 compute-0 nova_compute[189491]: 2025-12-01 09:31:54.341 189495 DEBUG oslo_concurrency.lockutils [req-21d6dddb-d922-4d25-888e-9f930bdd305a req-518944f5-be6e-4dc3-9d99-c75dbda50ea8 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-11a8e94c-61e3-4805-b688-e4b9121b30ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:31:56 compute-0 nova_compute[189491]: 2025-12-01 09:31:56.603 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:57 compute-0 nova_compute[189491]: 2025-12-01 09:31:57.713 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:31:58 compute-0 podman[246368]: 2025-12-01 09:31:58.752018701 +0000 UTC m=+0.095237269 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:31:58 compute-0 podman[246369]: 2025-12-01 09:31:58.763764446 +0000 UTC m=+0.111043162 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
Dec  1 09:31:59 compute-0 podman[203700]: time="2025-12-01T09:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:31:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:31:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 09:32:01 compute-0 openstack_network_exporter[205866]: ERROR   09:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:32:01 compute-0 openstack_network_exporter[205866]: ERROR   09:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:32:01 compute-0 openstack_network_exporter[205866]: ERROR   09:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:32:01 compute-0 openstack_network_exporter[205866]: ERROR   09:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:32:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:32:01 compute-0 openstack_network_exporter[205866]: ERROR   09:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:32:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:32:01 compute-0 nova_compute[189491]: 2025-12-01 09:32:01.609 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:02 compute-0 nova_compute[189491]: 2025-12-01 09:32:02.715 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:05 compute-0 podman[246412]: 2025-12-01 09:32:05.711106845 +0000 UTC m=+0.084444237 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 09:32:06 compute-0 nova_compute[189491]: 2025-12-01 09:32:06.577 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764581511.5768845, 11a8e94c-61e3-4805-b688-e4b9121b30ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:32:06 compute-0 nova_compute[189491]: 2025-12-01 09:32:06.578 189495 INFO nova.compute.manager [-] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:32:06 compute-0 nova_compute[189491]: 2025-12-01 09:32:06.613 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:06 compute-0 nova_compute[189491]: 2025-12-01 09:32:06.849 189495 DEBUG nova.compute.manager [None req-799a47a5-259c-469e-9f6b-5d1d0d0b55c4 - - - - - -] [instance: 11a8e94c-61e3-4805-b688-e4b9121b30ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:32:07 compute-0 nova_compute[189491]: 2025-12-01 09:32:07.717 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:09 compute-0 podman[246429]: 2025-12-01 09:32:09.723931507 +0000 UTC m=+0.088720442 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:32:09 compute-0 podman[246430]: 2025-12-01 09:32:09.734445091 +0000 UTC m=+0.091015436 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, container_name=kepler, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30)
Dec  1 09:32:11 compute-0 nova_compute[189491]: 2025-12-01 09:32:11.615 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:12 compute-0 nova_compute[189491]: 2025-12-01 09:32:12.719 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:14 compute-0 podman[246473]: 2025-12-01 09:32:14.705080489 +0000 UTC m=+0.067632200 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:32:14 compute-0 podman[246472]: 2025-12-01 09:32:14.713491502 +0000 UTC m=+0.080698215 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Dec  1 09:32:16 compute-0 nova_compute[189491]: 2025-12-01 09:32:16.618 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:17 compute-0 nova_compute[189491]: 2025-12-01 09:32:17.720 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:18 compute-0 podman[246510]: 2025-12-01 09:32:18.711126983 +0000 UTC m=+0.077256233 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  1 09:32:18 compute-0 podman[246511]: 2025-12-01 09:32:18.772753786 +0000 UTC m=+0.140525456 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.786 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.786 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.794 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.797 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'name': 'vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.800 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '350d2bc4-8489-4a5a-991a-99e32671f962', 'name': 'vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.801 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.801 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.801 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:32:19.801604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.881 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.882 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.882 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.959 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.960 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:19.960 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.034 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.034 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.035 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:32:20.038129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.064 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.065 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.065 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.091 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.092 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.092 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.115 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.116 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.116 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.117 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.118 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.118 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:32:20.117943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.118 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.118 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 623315277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.119 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 99798863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.119 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 80231981 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.119 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 451180044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.119 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 71893061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.119 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.latency volume: 57010170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.120 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.121 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.121 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:32:20.121101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.122 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.122 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.122 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.122 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.123 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.123 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.123 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.124 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.124 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.125 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:32:20.124826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.125 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.125 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.125 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.126 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.126 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.126 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 41783296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.127 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.127 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.128 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.128 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:32:20.128281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.152 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.178 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.200 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.201 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:32:20.201790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.202 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.202 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.203 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.203 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 664336258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.203 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 9391906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.203 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.203 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 1311172785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.204 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 7508073 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.204 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:32:20.205361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.206 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.206 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.206 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.206 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.206 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.207 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.207 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.207 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.207 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.208 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:32:20.208605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.213 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.217 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.220 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.221 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.221 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.221 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:32:20.222037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.222 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.224 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.224 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.224 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.225 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.225 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:32:20.225212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.225 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.226 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.226 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.226 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.227 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.227 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.227 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.227 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.228 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.228 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:32:20.227288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.228 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.229 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.229 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:32:20.228803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.229 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.230 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.230 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.230 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.230 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.231 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:32:20.230581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.231 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.231 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.232 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.232 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.232 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.232 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:32:20.232459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.233 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.233 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.233 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.233 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.233 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.234 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.234 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.234 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.234 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:32:20.234168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.235 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.235 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.235 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.235 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.235 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.236 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.236 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:32:20.236102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.236 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.236 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.236 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.237 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.237 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.237 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.237 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.237 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.238 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.238 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.239 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.239 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.239 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.240 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.240 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:32:20.237738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:32:20.239430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.240 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.240 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.241 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.241 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.241 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.241 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.241 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 40540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.241 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/cpu volume: 38550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.242 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/cpu volume: 36360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.242 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.242 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.243 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:32:20.241341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.243 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:32:20.242910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.243 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.243 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.244 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.244 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.244 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.244 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.245 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.245 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.245 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.246 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.246 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.246 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.246 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:32:20.246187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.246 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.246 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.247 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.247 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.247 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.247 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:32:20.247801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.248 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.248 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.248 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.248 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.249 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.249 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.249 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.249 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.250 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.250 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.250 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.250 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.251 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.251 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.251 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.251 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.251 14 DEBUG ceilometer.compute.pollsters [-] 350d2bc4-8489-4a5a-991a-99e32671f962/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.252 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.252 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:32:20.251101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.253 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.254 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:32:20.255 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:32:21 compute-0 nova_compute[189491]: 2025-12-01 09:32:21.621 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:22 compute-0 nova_compute[189491]: 2025-12-01 09:32:22.723 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:23 compute-0 nova_compute[189491]: 2025-12-01 09:32:23.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:23 compute-0 nova_compute[189491]: 2025-12-01 09:32:23.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:32:25 compute-0 ovn_controller[97794]: 2025-12-01T09:32:25Z|00053|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  1 09:32:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:32:26.519 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:32:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:32:26.520 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:32:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:32:26.520 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:32:26 compute-0 nova_compute[189491]: 2025-12-01 09:32:26.624 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:27 compute-0 nova_compute[189491]: 2025-12-01 09:32:27.725 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:29 compute-0 podman[246554]: 2025-12-01 09:32:29.719141142 +0000 UTC m=+0.079612199 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:32:29 compute-0 podman[246553]: 2025-12-01 09:32:29.735310815 +0000 UTC m=+0.097607557 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:32:29 compute-0 podman[203700]: time="2025-12-01T09:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:32:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:32:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  1 09:32:30 compute-0 nova_compute[189491]: 2025-12-01 09:32:30.085 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:30 compute-0 nova_compute[189491]: 2025-12-01 09:32:30.086 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:32:30 compute-0 nova_compute[189491]: 2025-12-01 09:32:30.792 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:32:30 compute-0 nova_compute[189491]: 2025-12-01 09:32:30.793 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:32:30 compute-0 nova_compute[189491]: 2025-12-01 09:32:30.794 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:32:31 compute-0 openstack_network_exporter[205866]: ERROR   09:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:32:31 compute-0 openstack_network_exporter[205866]: ERROR   09:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:32:31 compute-0 openstack_network_exporter[205866]: ERROR   09:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:32:31 compute-0 openstack_network_exporter[205866]: ERROR   09:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:32:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:32:31 compute-0 openstack_network_exporter[205866]: ERROR   09:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:32:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:32:31 compute-0 nova_compute[189491]: 2025-12-01 09:32:31.626 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:32 compute-0 nova_compute[189491]: 2025-12-01 09:32:32.315 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updating instance_info_cache with network_info: [{"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:32:32 compute-0 nova_compute[189491]: 2025-12-01 09:32:32.333 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:32:32 compute-0 nova_compute[189491]: 2025-12-01 09:32:32.334 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:32:32 compute-0 nova_compute[189491]: 2025-12-01 09:32:32.729 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:33 compute-0 nova_compute[189491]: 2025-12-01 09:32:33.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:33 compute-0 nova_compute[189491]: 2025-12-01 09:32:33.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:32:33 compute-0 nova_compute[189491]: 2025-12-01 09:32:33.729 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:32:34 compute-0 nova_compute[189491]: 2025-12-01 09:32:34.728 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:34 compute-0 nova_compute[189491]: 2025-12-01 09:32:34.831 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:32:34 compute-0 nova_compute[189491]: 2025-12-01 09:32:34.831 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:32:34 compute-0 nova_compute[189491]: 2025-12-01 09:32:34.832 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:32:34 compute-0 nova_compute[189491]: 2025-12-01 09:32:34.832 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:32:34 compute-0 nova_compute[189491]: 2025-12-01 09:32:34.932 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:34 compute-0 nova_compute[189491]: 2025-12-01 09:32:34.989 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:34 compute-0 nova_compute[189491]: 2025-12-01 09:32:34.991 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.050 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.051 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.112 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.114 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.203 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.211 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.271 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.272 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.334 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.335 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.395 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.396 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.454 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.462 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.525 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.526 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.586 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.587 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.649 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.650 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:32:35 compute-0 nova_compute[189491]: 2025-12-01 09:32:35.705 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.050 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.051 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4758MB free_disk=72.34038162231445GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.052 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.052 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.128 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.128 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.128 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.129 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.129 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.143 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.171 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.171 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.206 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.228 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.331 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.370 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.389 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.389 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.629 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:36 compute-0 nova_compute[189491]: 2025-12-01 09:32:36.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:36 compute-0 podman[246630]: 2025-12-01 09:32:36.733431436 +0000 UTC m=+0.097859462 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  1 09:32:37 compute-0 nova_compute[189491]: 2025-12-01 09:32:37.726 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:37 compute-0 nova_compute[189491]: 2025-12-01 09:32:37.727 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:37 compute-0 nova_compute[189491]: 2025-12-01 09:32:37.727 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:32:37 compute-0 nova_compute[189491]: 2025-12-01 09:32:37.730 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:38 compute-0 nova_compute[189491]: 2025-12-01 09:32:38.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:32:40 compute-0 podman[246650]: 2025-12-01 09:32:40.693252932 +0000 UTC m=+0.064505243 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:32:40 compute-0 podman[246651]: 2025-12-01 09:32:40.709923456 +0000 UTC m=+0.078041962 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=)
Dec  1 09:32:41 compute-0 nova_compute[189491]: 2025-12-01 09:32:41.630 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:42 compute-0 nova_compute[189491]: 2025-12-01 09:32:42.732 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:45 compute-0 podman[246692]: 2025-12-01 09:32:45.718581806 +0000 UTC m=+0.092197304 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:32:45 compute-0 podman[246693]: 2025-12-01 09:32:45.73853034 +0000 UTC m=+0.106798669 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  1 09:32:46 compute-0 nova_compute[189491]: 2025-12-01 09:32:46.634 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:47 compute-0 nova_compute[189491]: 2025-12-01 09:32:47.734 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:49 compute-0 podman[246729]: 2025-12-01 09:32:49.718101446 +0000 UTC m=+0.092217446 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 09:32:49 compute-0 podman[246730]: 2025-12-01 09:32:49.771882138 +0000 UTC m=+0.139410378 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 09:32:51 compute-0 nova_compute[189491]: 2025-12-01 09:32:51.636 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:52 compute-0 nova_compute[189491]: 2025-12-01 09:32:52.736 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:56 compute-0 nova_compute[189491]: 2025-12-01 09:32:56.638 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:57 compute-0 nova_compute[189491]: 2025-12-01 09:32:57.739 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:32:59 compute-0 podman[203700]: time="2025-12-01T09:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:32:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:32:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 09:33:00 compute-0 podman[246772]: 2025-12-01 09:33:00.702851321 +0000 UTC m=+0.074159208 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:33:00 compute-0 podman[246773]: 2025-12-01 09:33:00.766074533 +0000 UTC m=+0.121385522 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec  1 09:33:01 compute-0 openstack_network_exporter[205866]: ERROR   09:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:33:01 compute-0 openstack_network_exporter[205866]: ERROR   09:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:33:01 compute-0 openstack_network_exporter[205866]: ERROR   09:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:33:01 compute-0 openstack_network_exporter[205866]: ERROR   09:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:33:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:33:01 compute-0 openstack_network_exporter[205866]: ERROR   09:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:33:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:33:01 compute-0 nova_compute[189491]: 2025-12-01 09:33:01.640 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:02 compute-0 nova_compute[189491]: 2025-12-01 09:33:02.741 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:06 compute-0 nova_compute[189491]: 2025-12-01 09:33:06.642 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:07 compute-0 podman[246811]: 2025-12-01 09:33:07.693903374 +0000 UTC m=+0.067872966 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:33:07 compute-0 nova_compute[189491]: 2025-12-01 09:33:07.744 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:11 compute-0 nova_compute[189491]: 2025-12-01 09:33:11.646 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:11 compute-0 podman[246830]: 2025-12-01 09:33:11.699962646 +0000 UTC m=+0.071226727 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:33:11 compute-0 podman[246831]: 2025-12-01 09:33:11.711171847 +0000 UTC m=+0.077540580 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9)
Dec  1 09:33:12 compute-0 nova_compute[189491]: 2025-12-01 09:33:12.746 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:16 compute-0 nova_compute[189491]: 2025-12-01 09:33:16.648 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:16 compute-0 podman[246871]: 2025-12-01 09:33:16.702670812 +0000 UTC m=+0.066418220 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  1 09:33:16 compute-0 podman[246870]: 2025-12-01 09:33:16.71167384 +0000 UTC m=+0.079371874 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter)
Dec  1 09:33:17 compute-0 nova_compute[189491]: 2025-12-01 09:33:17.749 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:20 compute-0 podman[246912]: 2025-12-01 09:33:20.704257312 +0000 UTC m=+0.073964813 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 09:33:20 compute-0 podman[246913]: 2025-12-01 09:33:20.738934751 +0000 UTC m=+0.107404942 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:33:21 compute-0 nova_compute[189491]: 2025-12-01 09:33:21.651 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:22 compute-0 nova_compute[189491]: 2025-12-01 09:33:22.752 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:26.521 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:33:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:26.521 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:33:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:26.522 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:33:26 compute-0 nova_compute[189491]: 2025-12-01 09:33:26.655 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:27 compute-0 nova_compute[189491]: 2025-12-01 09:33:27.755 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:29 compute-0 podman[203700]: time="2025-12-01T09:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:33:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:33:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec  1 09:33:31 compute-0 openstack_network_exporter[205866]: ERROR   09:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:33:31 compute-0 openstack_network_exporter[205866]: ERROR   09:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:33:31 compute-0 openstack_network_exporter[205866]: ERROR   09:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:33:31 compute-0 openstack_network_exporter[205866]: ERROR   09:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:33:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:33:31 compute-0 openstack_network_exporter[205866]: ERROR   09:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:33:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:33:31 compute-0 nova_compute[189491]: 2025-12-01 09:33:31.657 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:31 compute-0 podman[246955]: 2025-12-01 09:33:31.68379549 +0000 UTC m=+0.059460141 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:33:31 compute-0 podman[246956]: 2025-12-01 09:33:31.692641145 +0000 UTC m=+0.066595895 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 09:33:31 compute-0 nova_compute[189491]: 2025-12-01 09:33:31.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:31 compute-0 nova_compute[189491]: 2025-12-01 09:33:31.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:33:32 compute-0 nova_compute[189491]: 2025-12-01 09:33:32.757 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:32 compute-0 nova_compute[189491]: 2025-12-01 09:33:32.817 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:33:32 compute-0 nova_compute[189491]: 2025-12-01 09:33:32.817 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:33:32 compute-0 nova_compute[189491]: 2025-12-01 09:33:32.818 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:33:34 compute-0 nova_compute[189491]: 2025-12-01 09:33:34.839 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updating instance_info_cache with network_info: [{"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:33:34 compute-0 nova_compute[189491]: 2025-12-01 09:33:34.927 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:33:34 compute-0 nova_compute[189491]: 2025-12-01 09:33:34.928 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:33:34 compute-0 nova_compute[189491]: 2025-12-01 09:33:34.929 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.074 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.074 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.075 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.075 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.374 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.441 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.442 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.502 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.503 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.565 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.567 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.630 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.638 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.705 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.706 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.762 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.763 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.819 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.820 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.916 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:35 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.923 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:35.999 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.000 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.056 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.057 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.115 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.116 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.181 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.571 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.572 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4760MB free_disk=72.34038162231445GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.572 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.573 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:33:36 compute-0 nova_compute[189491]: 2025-12-01 09:33:36.661 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.266 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.266 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 350d2bc4-8489-4a5a-991a-99e32671f962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.266 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.267 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.267 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.432 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.487 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.489 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.489 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:33:37 compute-0 nova_compute[189491]: 2025-12-01 09:33:37.760 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:38 compute-0 nova_compute[189491]: 2025-12-01 09:33:38.275 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:38 compute-0 nova_compute[189491]: 2025-12-01 09:33:38.303 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:38 compute-0 nova_compute[189491]: 2025-12-01 09:33:38.304 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:38 compute-0 nova_compute[189491]: 2025-12-01 09:33:38.305 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:38 compute-0 nova_compute[189491]: 2025-12-01 09:33:38.305 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:38 compute-0 podman[247032]: 2025-12-01 09:33:38.693573194 +0000 UTC m=+0.065255113 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 09:33:38 compute-0 nova_compute[189491]: 2025-12-01 09:33:38.738 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:39 compute-0 nova_compute[189491]: 2025-12-01 09:33:39.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:39 compute-0 nova_compute[189491]: 2025-12-01 09:33:39.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:33:40 compute-0 nova_compute[189491]: 2025-12-01 09:33:40.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:33:41 compute-0 nova_compute[189491]: 2025-12-01 09:33:41.665 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:42 compute-0 podman[247052]: 2025-12-01 09:33:42.730497686 +0000 UTC m=+0.090118794 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:33:42 compute-0 podman[247053]: 2025-12-01 09:33:42.746492574 +0000 UTC m=+0.115334236 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.expose-services=)
Dec  1 09:33:42 compute-0 nova_compute[189491]: 2025-12-01 09:33:42.763 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:46 compute-0 nova_compute[189491]: 2025-12-01 09:33:46.669 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:47 compute-0 podman[247094]: 2025-12-01 09:33:47.703598336 +0000 UTC m=+0.061945611 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:33:47 compute-0 podman[247093]: 2025-12-01 09:33:47.716523419 +0000 UTC m=+0.079290461 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, version=9.6)
Dec  1 09:33:47 compute-0 nova_compute[189491]: 2025-12-01 09:33:47.764 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:51 compute-0 nova_compute[189491]: 2025-12-01 09:33:51.672 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:51 compute-0 podman[247131]: 2025-12-01 09:33:51.692516252 +0000 UTC m=+0.067951398 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  1 09:33:51 compute-0 podman[247132]: 2025-12-01 09:33:51.737607924 +0000 UTC m=+0.102915614 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.617 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.617 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.618 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.618 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.619 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.621 189495 INFO nova.compute.manager [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Terminating instance#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.623 189495 DEBUG nova.compute.manager [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:33:52 compute-0 kernel: tapa79ae82e-bf (unregistering): left promiscuous mode
Dec  1 09:33:52 compute-0 NetworkManager[56318]: <info>  [1764581632.6632] device (tapa79ae82e-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.673 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:52 compute-0 ovn_controller[97794]: 2025-12-01T09:33:52Z|00054|binding|INFO|Releasing lport a79ae82e-bfbc-4718-a23a-6d99c6057e19 from this chassis (sb_readonly=0)
Dec  1 09:33:52 compute-0 ovn_controller[97794]: 2025-12-01T09:33:52Z|00055|binding|INFO|Setting lport a79ae82e-bfbc-4718-a23a-6d99c6057e19 down in Southbound
Dec  1 09:33:52 compute-0 ovn_controller[97794]: 2025-12-01T09:33:52Z|00056|binding|INFO|Removing iface tapa79ae82e-bf ovn-installed in OVS
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.687 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.697 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:68:61 192.168.0.209'], port_security=['fa:16:3e:da:68:61 192.168.0.209'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vdfkxa75cfa3-5bcj5tw5woc6-eld5euc3zwia-port-76rbqcpmcvz3', 'neutron:cidrs': '192.168.0.209/24', 'neutron:device_id': '350d2bc4-8489-4a5a-991a-99e32671f962', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vdfkxa75cfa3-5bcj5tw5woc6-eld5euc3zwia-port-76rbqcpmcvz3', 'neutron:project_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5a5e6d4-6211-447f-b3f6-e2120ff69d87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=260b7b6c-4405-41e2-9dc8-1595161adaf8, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=a79ae82e-bfbc-4718-a23a-6d99c6057e19) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.699 106659 INFO neutron.agent.ovn.metadata.agent [-] Port a79ae82e-bfbc-4718-a23a-6d99c6057e19 in datapath 52d15875-2a2e-463a-bc5d-8fa6b8466bff unbound from our chassis#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.699 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.700 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52d15875-2a2e-463a-bc5d-8fa6b8466bff#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.716 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f9c0b5-36cb-4c77-9d26-6d5badf7d0f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:33:52 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  1 09:33:52 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 48.161s CPU time.
Dec  1 09:33:52 compute-0 systemd-machined[155812]: Machine qemu-3-instance-00000003 terminated.
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.747 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[56edd4de-2b2b-492e-83a4-2b1b93a74007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.750 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2d7d52-5dd0-4099-a545-58b7f48432fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.766 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.779 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[27c61d5e-4358-448e-a58a-96671083adfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.796 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[4c65defc-f2aa-4e9a-90ed-ba6e8fb20f0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52d15875-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:8c:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383928, 'reachable_time': 18579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247188, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.814 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[11b01143-2400-4dae-85f1-65635b00dc58]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383943, 'tstamp': 383943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247189, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383945, 'tstamp': 383945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247189, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.817 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d15875-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.819 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.825 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.825 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d15875-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.825 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.826 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52d15875-20, col_values=(('external_ids', {'iface-id': 'dbcd2eb8-9722-4ebb-b254-d57f599617d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:33:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:52.826 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.914 189495 INFO nova.virt.libvirt.driver [-] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Instance destroyed successfully.#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.915 189495 DEBUG nova.objects.instance [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'resources' on Instance uuid 350d2bc4-8489-4a5a-991a-99e32671f962 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.939 189495 DEBUG nova.virt.libvirt.vif [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-5bcj5tw5woc6-eld5euc3zwia-vnf-qwzf3cpwxtqu',id=3,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-l4ia17ve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:24:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDk4MTk5MDAyMTc1ODI0NDQ0MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  1 09:33:52 compute-0 nova_compute[189491]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDk4M
Tk5MDAyMTc1ODI0NDQ0MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA5ODE5OTAwMjE3NTgyNDQ0NDA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wOTgxOTkwMDIxNzU4MjQ0NDQwPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=350d2bc4-8489-4a5a-991a-99e32671f962,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.940 189495 DEBUG nova.network.os_vif_util [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "address": "fa:16:3e:da:68:61", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa79ae82e-bf", "ovs_interfaceid": "a79ae82e-bfbc-4718-a23a-6d99c6057e19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.941 189495 DEBUG nova.network.os_vif_util [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:68:61,bridge_name='br-int',has_traffic_filtering=True,id=a79ae82e-bfbc-4718-a23a-6d99c6057e19,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa79ae82e-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.941 189495 DEBUG os_vif [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:68:61,bridge_name='br-int',has_traffic_filtering=True,id=a79ae82e-bfbc-4718-a23a-6d99c6057e19,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa79ae82e-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.944 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.944 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa79ae82e-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.947 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.948 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.951 189495 INFO os_vif [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:68:61,bridge_name='br-int',has_traffic_filtering=True,id=a79ae82e-bfbc-4718-a23a-6d99c6057e19,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa79ae82e-bf')#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.952 189495 INFO nova.virt.libvirt.driver [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Deleting instance files /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962_del#033[00m
Dec  1 09:33:52 compute-0 nova_compute[189491]: 2025-12-01 09:33:52.953 189495 INFO nova.virt.libvirt.driver [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Deletion of /var/lib/nova/instances/350d2bc4-8489-4a5a-991a-99e32671f962_del complete#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.066 189495 INFO nova.compute.manager [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.067 189495 DEBUG oslo.service.loopingcall [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.067 189495 DEBUG nova.compute.manager [-] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.067 189495 DEBUG nova.network.neutron [-] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.185 189495 DEBUG nova.compute.manager [req-54f6cf23-4342-4e76-a1b3-c588a664079b req-c7b26eb7-14ad-4f9c-9466-d748bf0f077c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received event network-vif-unplugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.185 189495 DEBUG oslo_concurrency.lockutils [req-54f6cf23-4342-4e76-a1b3-c588a664079b req-c7b26eb7-14ad-4f9c-9466-d748bf0f077c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.186 189495 DEBUG oslo_concurrency.lockutils [req-54f6cf23-4342-4e76-a1b3-c588a664079b req-c7b26eb7-14ad-4f9c-9466-d748bf0f077c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.186 189495 DEBUG oslo_concurrency.lockutils [req-54f6cf23-4342-4e76-a1b3-c588a664079b req-c7b26eb7-14ad-4f9c-9466-d748bf0f077c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.186 189495 DEBUG nova.compute.manager [req-54f6cf23-4342-4e76-a1b3-c588a664079b req-c7b26eb7-14ad-4f9c-9466-d748bf0f077c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] No waiting events found dispatching network-vif-unplugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.186 189495 DEBUG nova.compute.manager [req-54f6cf23-4342-4e76-a1b3-c588a664079b req-c7b26eb7-14ad-4f9c-9466-d748bf0f077c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received event network-vif-unplugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:33:53 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:33:52.939 189495 DEBUG nova.virt.libvirt.vif [None req-b5bac7cb-57 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:33:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:53.641 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:33:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:53.642 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:33:53 compute-0 nova_compute[189491]: 2025-12-01 09:33:53.642 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:33:53.644 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:33:54 compute-0 nova_compute[189491]: 2025-12-01 09:33:54.648 189495 DEBUG nova.compute.manager [req-9d2738df-219a-4de4-92be-553da69b298a req-6debc92e-b3ef-4a0e-b985-6fc2d0445ccd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received event network-changed-a79ae82e-bfbc-4718-a23a-6d99c6057e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:33:54 compute-0 nova_compute[189491]: 2025-12-01 09:33:54.649 189495 DEBUG nova.compute.manager [req-9d2738df-219a-4de4-92be-553da69b298a req-6debc92e-b3ef-4a0e-b985-6fc2d0445ccd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Refreshing instance network info cache due to event network-changed-a79ae82e-bfbc-4718-a23a-6d99c6057e19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:33:54 compute-0 nova_compute[189491]: 2025-12-01 09:33:54.649 189495 DEBUG oslo_concurrency.lockutils [req-9d2738df-219a-4de4-92be-553da69b298a req-6debc92e-b3ef-4a0e-b985-6fc2d0445ccd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:33:54 compute-0 nova_compute[189491]: 2025-12-01 09:33:54.649 189495 DEBUG oslo_concurrency.lockutils [req-9d2738df-219a-4de4-92be-553da69b298a req-6debc92e-b3ef-4a0e-b985-6fc2d0445ccd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:33:54 compute-0 nova_compute[189491]: 2025-12-01 09:33:54.649 189495 DEBUG nova.network.neutron [req-9d2738df-219a-4de4-92be-553da69b298a req-6debc92e-b3ef-4a0e-b985-6fc2d0445ccd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Refreshing network info cache for port a79ae82e-bfbc-4718-a23a-6d99c6057e19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.125 189495 INFO nova.network.neutron [req-9d2738df-219a-4de4-92be-553da69b298a req-6debc92e-b3ef-4a0e-b985-6fc2d0445ccd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Port a79ae82e-bfbc-4718-a23a-6d99c6057e19 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.125 189495 DEBUG nova.network.neutron [req-9d2738df-219a-4de4-92be-553da69b298a req-6debc92e-b3ef-4a0e-b985-6fc2d0445ccd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.154 189495 DEBUG oslo_concurrency.lockutils [req-9d2738df-219a-4de4-92be-553da69b298a req-6debc92e-b3ef-4a0e-b985-6fc2d0445ccd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-350d2bc4-8489-4a5a-991a-99e32671f962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.384 189495 DEBUG nova.network.neutron [-] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.390 189495 DEBUG nova.compute.manager [req-165f368b-2717-473d-9d03-15c9106324c9 req-c0602583-c253-43ca-9570-ec56263b42a4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received event network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.390 189495 DEBUG oslo_concurrency.lockutils [req-165f368b-2717-473d-9d03-15c9106324c9 req-c0602583-c253-43ca-9570-ec56263b42a4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.391 189495 DEBUG oslo_concurrency.lockutils [req-165f368b-2717-473d-9d03-15c9106324c9 req-c0602583-c253-43ca-9570-ec56263b42a4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.391 189495 DEBUG oslo_concurrency.lockutils [req-165f368b-2717-473d-9d03-15c9106324c9 req-c0602583-c253-43ca-9570-ec56263b42a4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.391 189495 DEBUG nova.compute.manager [req-165f368b-2717-473d-9d03-15c9106324c9 req-c0602583-c253-43ca-9570-ec56263b42a4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] No waiting events found dispatching network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.392 189495 WARNING nova.compute.manager [req-165f368b-2717-473d-9d03-15c9106324c9 req-c0602583-c253-43ca-9570-ec56263b42a4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Received unexpected event network-vif-plugged-a79ae82e-bfbc-4718-a23a-6d99c6057e19 for instance with vm_state active and task_state deleting.#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.418 189495 INFO nova.compute.manager [-] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Took 2.35 seconds to deallocate network for instance.#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.476 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.477 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.620 189495 DEBUG nova.compute.provider_tree [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.636 189495 DEBUG nova.scheduler.client.report [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.674 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.712 189495 INFO nova.scheduler.client.report [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Deleted allocations for instance 350d2bc4-8489-4a5a-991a-99e32671f962#033[00m
Dec  1 09:33:55 compute-0 nova_compute[189491]: 2025-12-01 09:33:55.784 189495 DEBUG oslo_concurrency.lockutils [None req-b5bac7cb-575b-4a9a-b947-947a24540a7f 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "350d2bc4-8489-4a5a-991a-99e32671f962" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:33:57 compute-0 nova_compute[189491]: 2025-12-01 09:33:57.769 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:57 compute-0 nova_compute[189491]: 2025-12-01 09:33:57.946 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:33:59 compute-0 podman[203700]: time="2025-12-01T09:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:33:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:33:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 09:33:59 compute-0 rsyslogd[236849]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 09:33:59 compute-0 rsyslogd[236849]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 09:34:01 compute-0 openstack_network_exporter[205866]: ERROR   09:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:34:01 compute-0 openstack_network_exporter[205866]: ERROR   09:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:34:01 compute-0 openstack_network_exporter[205866]: ERROR   09:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:34:01 compute-0 openstack_network_exporter[205866]: ERROR   09:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:34:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:34:01 compute-0 openstack_network_exporter[205866]: ERROR   09:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:34:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:34:02 compute-0 podman[247213]: 2025-12-01 09:34:02.692197665 +0000 UTC m=+0.058820066 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:34:02 compute-0 podman[247214]: 2025-12-01 09:34:02.742524524 +0000 UTC m=+0.105080507 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 09:34:02 compute-0 nova_compute[189491]: 2025-12-01 09:34:02.771 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:02 compute-0 nova_compute[189491]: 2025-12-01 09:34:02.948 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:07 compute-0 nova_compute[189491]: 2025-12-01 09:34:07.772 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:07 compute-0 nova_compute[189491]: 2025-12-01 09:34:07.911 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764581632.9095454, 350d2bc4-8489-4a5a-991a-99e32671f962 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:34:07 compute-0 nova_compute[189491]: 2025-12-01 09:34:07.912 189495 INFO nova.compute.manager [-] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:34:07 compute-0 nova_compute[189491]: 2025-12-01 09:34:07.938 189495 DEBUG nova.compute.manager [None req-7a17fb03-b32b-40d3-899b-31e238b8f08f - - - - - -] [instance: 350d2bc4-8489-4a5a-991a-99e32671f962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:34:07 compute-0 nova_compute[189491]: 2025-12-01 09:34:07.950 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:09 compute-0 podman[247258]: 2025-12-01 09:34:09.735376125 +0000 UTC m=+0.096563510 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  1 09:34:12 compute-0 nova_compute[189491]: 2025-12-01 09:34:12.774 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:12 compute-0 nova_compute[189491]: 2025-12-01 09:34:12.953 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:13 compute-0 podman[247277]: 2025-12-01 09:34:13.716514749 +0000 UTC m=+0.087857519 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, vcs-type=git, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 09:34:13 compute-0 podman[247276]: 2025-12-01 09:34:13.746203098 +0000 UTC m=+0.119735551 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:34:17 compute-0 nova_compute[189491]: 2025-12-01 09:34:17.777 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:17 compute-0 nova_compute[189491]: 2025-12-01 09:34:17.956 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:18 compute-0 podman[247318]: 2025-12-01 09:34:18.698713968 +0000 UTC m=+0.071460343 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Dec  1 09:34:18 compute-0 podman[247319]: 2025-12-01 09:34:18.730037427 +0000 UTC m=+0.094399398 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.785 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.786 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.795 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.800 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'name': 'vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.801 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.801 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.802 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:34:19.802122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.884 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.885 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.885 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.962 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.963 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.963 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.964 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:34:19.964512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.986 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.986 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:19.987 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.012 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.012 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.012 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.014 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.014 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:34:20.013835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.014 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.014 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 623315277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.015 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 99798863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.015 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 80231981 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.015 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.016 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.016 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.016 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:34:20.016018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.016 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.016 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.017 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.017 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.018 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:34:20.018135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.018 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.018 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.019 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.019 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.019 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.019 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:34:20.020157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.042 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.070 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.071 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.071 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.071 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.072 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:34:20.071353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.072 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.072 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 664336258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.072 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 9391906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.072 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.073 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.074 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:34:20.073810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.074 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.074 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.074 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.074 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.075 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:34:20.076103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.079 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.083 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.084 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:34:20.084596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.085 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.085 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:34:20.085706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.086 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.086 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.087 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:34:20.087170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.088 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.088 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.088 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:34:20.088295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.089 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.089 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.090 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.090 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.091 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:34:20.089733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.091 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:34:20.091026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.092 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:34:20.092637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.092 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.093 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.093 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.094 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.094 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:34:20.094223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.094 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.095 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.095 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.095 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.095 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.096 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:34:20.095410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.096 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:34:20.096729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.097 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.097 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.097 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.098 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 41810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.098 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/cpu volume: 39860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.098 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.099 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.099 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.099 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:34:20.097878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:34:20.099270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.099 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.100 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.100 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.100 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.101 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.101 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.101 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:34:20.101298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:34:20.102523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.102 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.103 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.103 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.103 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.103 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.104 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.104 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:34:20.104491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:34:20.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:34:22 compute-0 podman[247359]: 2025-12-01 09:34:22.695955102 +0000 UTC m=+0.071939464 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:34:22 compute-0 podman[247360]: 2025-12-01 09:34:22.731925944 +0000 UTC m=+0.106088902 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:34:22 compute-0 nova_compute[189491]: 2025-12-01 09:34:22.779 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:22 compute-0 nova_compute[189491]: 2025-12-01 09:34:22.958 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:24 compute-0 systemd-logind[792]: New session 29 of user zuul.
Dec  1 09:34:24 compute-0 systemd[1]: Started Session 29 of User zuul.
Dec  1 09:34:25 compute-0 python3[247581]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:34:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:34:26.522 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:34:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:34:26.523 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:34:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:34:26.524 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:34:27 compute-0 nova_compute[189491]: 2025-12-01 09:34:27.782 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:27 compute-0 nova_compute[189491]: 2025-12-01 09:34:27.961 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:28 compute-0 ovn_controller[97794]: 2025-12-01T09:34:28Z|00057|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  1 09:34:29 compute-0 podman[203700]: time="2025-12-01T09:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:34:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:34:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 09:34:31 compute-0 openstack_network_exporter[205866]: ERROR   09:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:34:31 compute-0 openstack_network_exporter[205866]: ERROR   09:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:34:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:34:31 compute-0 openstack_network_exporter[205866]: ERROR   09:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:34:31 compute-0 openstack_network_exporter[205866]: ERROR   09:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:34:31 compute-0 openstack_network_exporter[205866]: ERROR   09:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:34:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:34:32 compute-0 nova_compute[189491]: 2025-12-01 09:34:32.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:34:32 compute-0 nova_compute[189491]: 2025-12-01 09:34:32.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:34:32 compute-0 nova_compute[189491]: 2025-12-01 09:34:32.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:34:32 compute-0 nova_compute[189491]: 2025-12-01 09:34:32.785 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:32 compute-0 nova_compute[189491]: 2025-12-01 09:34:32.963 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:34:33 compute-0 podman[247619]: 2025-12-01 09:34:33.719000625 +0000 UTC m=+0.085265857 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  1 09:34:33 compute-0 podman[247618]: 2025-12-01 09:34:33.730518144 +0000 UTC m=+0.096711954 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:34:33 compute-0 nova_compute[189491]: 2025-12-01 09:34:33.972 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:34:33 compute-0 nova_compute[189491]: 2025-12-01 09:34:33.973 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:34:33 compute-0 nova_compute[189491]: 2025-12-01 09:34:33.973 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:34:33 compute-0 nova_compute[189491]: 2025-12-01 09:34:33.978 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.256 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.283 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.283 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.284 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.284 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.315 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.315 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.316 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.316 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.411 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.480 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.481 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.540 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.541 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.601 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.601 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.663 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.670 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.730 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.731 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.791 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.792 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.850 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.851 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:34:36 compute-0 nova_compute[189491]: 2025-12-01 09:34:36.909 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.250 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.251 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4927MB free_disk=72.36184692382812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.251 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.252 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.348 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.348 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.348 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.349 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.404 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.419 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.438 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.438 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.792 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:37 compute-0 nova_compute[189491]: 2025-12-01 09:34:37.967 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:38 compute-0 nova_compute[189491]: 2025-12-01 09:34:38.868 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:34:38 compute-0 nova_compute[189491]: 2025-12-01 09:34:38.869 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:34:39 compute-0 nova_compute[189491]: 2025-12-01 09:34:39.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:34:40 compute-0 podman[247682]: 2025-12-01 09:34:40.703464447 +0000 UTC m=+0.070908428 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  1 09:34:40 compute-0 nova_compute[189491]: 2025-12-01 09:34:40.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:34:40 compute-0 nova_compute[189491]: 2025-12-01 09:34:40.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:34:40 compute-0 nova_compute[189491]: 2025-12-01 09:34:40.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:34:40 compute-0 nova_compute[189491]: 2025-12-01 09:34:40.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 09:34:42 compute-0 nova_compute[189491]: 2025-12-01 09:34:42.795 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:42 compute-0 nova_compute[189491]: 2025-12-01 09:34:42.970 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:42 compute-0 nova_compute[189491]: 2025-12-01 09:34:42.982 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:34:42 compute-0 nova_compute[189491]: 2025-12-01 09:34:42.982 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:34:42 compute-0 nova_compute[189491]: 2025-12-01 09:34:42.998 189495 DEBUG nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.067 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.068 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.077 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.077 189495 INFO nova.compute.claims [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Claim successful on node compute-0.ctlplane.example.com
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.222 189495 DEBUG nova.compute.provider_tree [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.391 189495 DEBUG nova.scheduler.client.report [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.470 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.471 189495 DEBUG nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.519 189495 DEBUG nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.552 189495 INFO nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.591 189495 DEBUG nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.721 189495 DEBUG nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.724 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.725 189495 INFO nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Creating image(s)
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.726 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.726 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.727 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.727 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "3bf6c54845f5e9621e4fb27f7d70d848ea2fd366" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:34:43 compute-0 nova_compute[189491]: 2025-12-01 09:34:43.728 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "3bf6c54845f5e9621e4fb27f7d70d848ea2fd366" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:34:44 compute-0 podman[247700]: 2025-12-01 09:34:44.692863673 +0000 UTC m=+0.068282566 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:34:44 compute-0 podman[247701]: 2025-12-01 09:34:44.712073077 +0000 UTC m=+0.083176725 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  1 09:34:44 compute-0 nova_compute[189491]: 2025-12-01 09:34:44.986 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.062 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366.part --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.063 189495 DEBUG nova.virt.images [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] 3543bf4f-9e23-4a08-9641-acb14b5c984b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.064 189495 DEBUG nova.privsep.utils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.064 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366.part /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.285 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366.part /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366.converted" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.289 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.349 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.350 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "3bf6c54845f5e9621e4fb27f7d70d848ea2fd366" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.365 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.423 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.424 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "3bf6c54845f5e9621e4fb27f7d70d848ea2fd366" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.424 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "3bf6c54845f5e9621e4fb27f7d70d848ea2fd366" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.441 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.499 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.500 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366,backing_fmt=raw /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.795 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366,backing_fmt=raw /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk 1073741824" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.797 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "3bf6c54845f5e9621e4fb27f7d70d848ea2fd366" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.799 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.865 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.866 189495 DEBUG nova.virt.disk.api [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Checking if we can resize image /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.866 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.930 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.931 189495 DEBUG nova.virt.disk.api [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Cannot resize image /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.931 189495 DEBUG nova.objects.instance [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'migration_context' on Instance uuid fb95197c-0dde-4cf7-ace7-4d00e40f5d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.947 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.948 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.948 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:34:45 compute-0 nova_compute[189491]: 2025-12-01 09:34:45.964 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.028 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.029 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.029 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.041 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.099 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.100 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.424 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.eph0 1073741824" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.426 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.427 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.492 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.493 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.494 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Ensure instance console log exists: /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.494 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.495 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.495 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.497 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:34:29Z,direct_url=<?>,disk_format='qcow2',id=3543bf4f-9e23-4a08-9641-acb14b5c984b,min_disk=0,min_ram=0,name='fvt_testing_image',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:34:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '3543bf4f-9e23-4a08-9641-acb14b5c984b'}], 'ephemerals': [{'size': 1, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.504 189495 WARNING nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.510 189495 DEBUG nova.virt.libvirt.host [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.511 189495 DEBUG nova.virt.libvirt.host [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.515 189495 DEBUG nova.virt.libvirt.host [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.516 189495 DEBUG nova.virt.libvirt.host [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.516 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.517 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:34:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c630b2e6-1a0c-4849-a37e-89370d979c93',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T09:34:29Z,direct_url=<?>,disk_format='qcow2',id=3543bf4f-9e23-4a08-9641-acb14b5c984b,min_disk=0,min_ram=0,name='fvt_testing_image',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T09:34:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.518 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.518 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.519 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.519 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.519 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.520 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.521 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.521 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.522 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.522 189495 DEBUG nova.virt.hardware [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.528 189495 DEBUG nova.objects.instance [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb95197c-0dde-4cf7-ace7-4d00e40f5d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.544 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <uuid>fb95197c-0dde-4cf7-ace7-4d00e40f5d0f</uuid>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <name>instance-00000005</name>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <memory>524288</memory>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <nova:name>fvt_testing_server</nova:name>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:34:46</nova:creationTime>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <nova:flavor name="fvt_testing_flavor">
Dec  1 09:34:46 compute-0 nova_compute[189491]:        <nova:memory>512</nova:memory>
Dec  1 09:34:46 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:34:46 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:34:46 compute-0 nova_compute[189491]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 09:34:46 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:34:46 compute-0 nova_compute[189491]:        <nova:user uuid="962a55152ff34fdda5eae1f8aee3a7a9">admin</nova:user>
Dec  1 09:34:46 compute-0 nova_compute[189491]:        <nova:project uuid="fac95b8a995a4174bfa966a8d9d9aa01">admin</nova:project>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="3543bf4f-9e23-4a08-9641-acb14b5c984b"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <nova:ports/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <system>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <entry name="serial">fb95197c-0dde-4cf7-ace7-4d00e40f5d0f</entry>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <entry name="uuid">fb95197c-0dde-4cf7-ace7-4d00e40f5d0f</entry>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </system>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <os>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  </os>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <features>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  </features>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.eph0"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <target dev="vdb" bus="virtio"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.config"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/console.log" append="off"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <video>
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </video>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:34:46 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:34:46 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:34:46 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:34:46 compute-0 nova_compute[189491]: </domain>
Dec  1 09:34:46 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.602 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.602 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.602 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.603 189495 INFO nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Using config drive
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.957 189495 INFO nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Creating config drive at /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.config
Dec  1 09:34:46 compute-0 nova_compute[189491]: 2025-12-01 09:34:46.963 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbpq05zgm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.093 189495 DEBUG oslo_concurrency.processutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbpq05zgm" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:34:47 compute-0 systemd-machined[155812]: New machine qemu-5-instance-00000005.
Dec  1 09:34:47 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.502 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764581687.4995973, fb95197c-0dde-4cf7-ace7-4d00e40f5d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.504 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] VM Resumed (Lifecycle Event)
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.506 189495 DEBUG nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.507 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.511 189495 INFO nova.virt.libvirt.driver [-] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Instance spawned successfully.
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.512 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.530 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.537 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.542 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.543 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.543 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.544 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.544 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.545 189495 DEBUG nova.virt.libvirt.driver [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.568 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.569 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764581687.503481, fb95197c-0dde-4cf7-ace7-4d00e40f5d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.569 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] VM Started (Lifecycle Event)
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.600 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.606 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.609 189495 INFO nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Took 3.89 seconds to spawn the instance on the hypervisor.
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.610 189495 DEBUG nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.632 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.657 189495 INFO nova.compute.manager [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Took 4.62 seconds to build instance.
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.680 189495 DEBUG oslo_concurrency.lockutils [None req-e4c0039a-edea-4954-8d74-256f313ab2c3 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.800 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:47 compute-0 nova_compute[189491]: 2025-12-01 09:34:47.972 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:48 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 09:34:48 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 09:34:49 compute-0 podman[247833]: 2025-12-01 09:34:49.718751949 +0000 UTC m=+0.089332035 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:34:49 compute-0 podman[247834]: 2025-12-01 09:34:49.742332891 +0000 UTC m=+0.110020497 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 09:34:52 compute-0 nova_compute[189491]: 2025-12-01 09:34:52.801 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:52 compute-0 nova_compute[189491]: 2025-12-01 09:34:52.974 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:53 compute-0 podman[247868]: 2025-12-01 09:34:53.723653339 +0000 UTC m=+0.085552704 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:34:53 compute-0 podman[247869]: 2025-12-01 09:34:53.766478887 +0000 UTC m=+0.129035067 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 09:34:57 compute-0 nova_compute[189491]: 2025-12-01 09:34:57.803 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:57 compute-0 nova_compute[189491]: 2025-12-01 09:34:57.976 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:34:59 compute-0 podman[203700]: time="2025-12-01T09:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:34:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:34:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4817 "" "Go-http-client/1.1"
Dec  1 09:35:01 compute-0 openstack_network_exporter[205866]: ERROR   09:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:35:01 compute-0 openstack_network_exporter[205866]: ERROR   09:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:35:01 compute-0 openstack_network_exporter[205866]: ERROR   09:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:35:01 compute-0 openstack_network_exporter[205866]: ERROR   09:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:35:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:35:01 compute-0 openstack_network_exporter[205866]: ERROR   09:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:35:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:35:02 compute-0 nova_compute[189491]: 2025-12-01 09:35:02.805 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:35:02 compute-0 nova_compute[189491]: 2025-12-01 09:35:02.979 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:35:04 compute-0 podman[247912]: 2025-12-01 09:35:04.683592906 +0000 UTC m=+0.060430125 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:35:04 compute-0 podman[247913]: 2025-12-01 09:35:04.721509665 +0000 UTC m=+0.095513525 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.826 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.826 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.826 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.827 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.827 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.828 189495 INFO nova.compute.manager [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Terminating instance#033[00m
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.828 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "refresh_cache-fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.829 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquired lock "refresh_cache-fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:35:04 compute-0 nova_compute[189491]: 2025-12-01 09:35:04.829 189495 DEBUG nova.network.neutron [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:35:05 compute-0 nova_compute[189491]: 2025-12-01 09:35:05.754 189495 DEBUG nova.network.neutron [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.027 189495 DEBUG nova.network.neutron [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.045 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Releasing lock "refresh_cache-fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.046 189495 DEBUG nova.compute.manager [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:35:07 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  1 09:35:07 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.033s CPU time.
Dec  1 09:35:07 compute-0 systemd-machined[155812]: Machine qemu-5-instance-00000005 terminated.
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.310 189495 INFO nova.virt.libvirt.driver [-] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Instance destroyed successfully.#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.311 189495 DEBUG nova.objects.instance [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'resources' on Instance uuid fb95197c-0dde-4cf7-ace7-4d00e40f5d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.324 189495 INFO nova.virt.libvirt.driver [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Deleting instance files /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f_del#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.326 189495 INFO nova.virt.libvirt.driver [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Deletion of /var/lib/nova/instances/fb95197c-0dde-4cf7-ace7-4d00e40f5d0f_del complete#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.373 189495 INFO nova.compute.manager [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Took 0.33 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.374 189495 DEBUG oslo.service.loopingcall [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.374 189495 DEBUG nova.compute.manager [-] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.374 189495 DEBUG nova.network.neutron [-] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.808 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.836 189495 DEBUG nova.network.neutron [-] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.850 189495 DEBUG nova.network.neutron [-] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.864 189495 INFO nova.compute.manager [-] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Took 0.49 seconds to deallocate network for instance.#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.915 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.916 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:35:07 compute-0 nova_compute[189491]: 2025-12-01 09:35:07.981 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:08 compute-0 nova_compute[189491]: 2025-12-01 09:35:08.012 189495 DEBUG nova.compute.provider_tree [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:35:08 compute-0 nova_compute[189491]: 2025-12-01 09:35:08.027 189495 DEBUG nova.scheduler.client.report [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:35:08 compute-0 nova_compute[189491]: 2025-12-01 09:35:08.049 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:35:08 compute-0 nova_compute[189491]: 2025-12-01 09:35:08.281 189495 INFO nova.scheduler.client.report [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Deleted allocations for instance fb95197c-0dde-4cf7-ace7-4d00e40f5d0f#033[00m
Dec  1 09:35:08 compute-0 nova_compute[189491]: 2025-12-01 09:35:08.365 189495 DEBUG oslo_concurrency.lockutils [None req-8f7e1c64-546d-4dcd-ab85-afb59b1be557 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "fb95197c-0dde-4cf7-ace7-4d00e40f5d0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:35:11 compute-0 podman[247967]: 2025-12-01 09:35:11.724582777 +0000 UTC m=+0.098032516 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:35:12 compute-0 nova_compute[189491]: 2025-12-01 09:35:12.810 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:12 compute-0 nova_compute[189491]: 2025-12-01 09:35:12.983 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:14 compute-0 podman[247987]: 2025-12-01 09:35:14.79251732 +0000 UTC m=+0.066939863 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:35:14 compute-0 podman[248010]: 2025-12-01 09:35:14.908820768 +0000 UTC m=+0.091978529 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  1 09:35:17 compute-0 nova_compute[189491]: 2025-12-01 09:35:17.813 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:17 compute-0 nova_compute[189491]: 2025-12-01 09:35:17.985 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:20 compute-0 podman[248031]: 2025-12-01 09:35:20.724801491 +0000 UTC m=+0.081720731 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, name=ubi9-minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec  1 09:35:20 compute-0 podman[248032]: 2025-12-01 09:35:20.725406396 +0000 UTC m=+0.082333516 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  1 09:35:22 compute-0 nova_compute[189491]: 2025-12-01 09:35:22.307 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764581707.3067942, fb95197c-0dde-4cf7-ace7-4d00e40f5d0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:35:22 compute-0 nova_compute[189491]: 2025-12-01 09:35:22.308 189495 INFO nova.compute.manager [-] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:35:22 compute-0 nova_compute[189491]: 2025-12-01 09:35:22.738 189495 DEBUG nova.compute.manager [None req-9ed12d7d-6c28-465a-ac9b-d1f1b5d5f4f8 - - - - - -] [instance: fb95197c-0dde-4cf7-ace7-4d00e40f5d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:35:22 compute-0 nova_compute[189491]: 2025-12-01 09:35:22.816 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:22 compute-0 nova_compute[189491]: 2025-12-01 09:35:22.987 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:24 compute-0 podman[248067]: 2025-12-01 09:35:24.712164176 +0000 UTC m=+0.080843310 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 09:35:24 compute-0 podman[248068]: 2025-12-01 09:35:24.735790199 +0000 UTC m=+0.102553696 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  1 09:35:25 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec  1 09:35:25 compute-0 systemd[1]: session-29.scope: Consumed 1.028s CPU time.
Dec  1 09:35:25 compute-0 systemd-logind[792]: Session 29 logged out. Waiting for processes to exit.
Dec  1 09:35:25 compute-0 systemd-logind[792]: Removed session 29.
Dec  1 09:35:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:35:26.523 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:35:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:35:26.525 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:35:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:35:26.526 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:35:27 compute-0 nova_compute[189491]: 2025-12-01 09:35:27.819 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:27 compute-0 nova_compute[189491]: 2025-12-01 09:35:27.991 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:29 compute-0 podman[203700]: time="2025-12-01T09:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:35:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:35:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 09:35:31 compute-0 openstack_network_exporter[205866]: ERROR   09:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:35:31 compute-0 openstack_network_exporter[205866]: ERROR   09:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:35:31 compute-0 openstack_network_exporter[205866]: ERROR   09:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:35:31 compute-0 openstack_network_exporter[205866]: ERROR   09:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:35:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:35:31 compute-0 openstack_network_exporter[205866]: ERROR   09:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:35:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:35:32 compute-0 nova_compute[189491]: 2025-12-01 09:35:32.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:32 compute-0 nova_compute[189491]: 2025-12-01 09:35:32.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:35:32 compute-0 nova_compute[189491]: 2025-12-01 09:35:32.821 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:32 compute-0 nova_compute[189491]: 2025-12-01 09:35:32.994 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:33 compute-0 nova_compute[189491]: 2025-12-01 09:35:33.992 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:35:33 compute-0 nova_compute[189491]: 2025-12-01 09:35:33.993 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:35:33 compute-0 nova_compute[189491]: 2025-12-01 09:35:33.993 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:35:35 compute-0 podman[248109]: 2025-12-01 09:35:35.701455458 +0000 UTC m=+0.065783775 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:35:35 compute-0 podman[248110]: 2025-12-01 09:35:35.707759611 +0000 UTC m=+0.072180520 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.189 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updating instance_info_cache with network_info: [{"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.565 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.566 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.567 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.567 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.696 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.697 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.697 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.697 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.823 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:37 compute-0 nova_compute[189491]: 2025-12-01 09:35:37.998 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.110 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.172 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.173 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.244 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.245 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.312 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.313 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.387 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.405 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.471 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.473 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.535 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.536 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.593 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.594 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.655 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.994 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.995 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=72.3349494934082GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.996 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:35:38 compute-0 nova_compute[189491]: 2025-12-01 09:35:38.996 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:35:39 compute-0 nova_compute[189491]: 2025-12-01 09:35:39.100 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:35:39 compute-0 nova_compute[189491]: 2025-12-01 09:35:39.100 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:35:39 compute-0 nova_compute[189491]: 2025-12-01 09:35:39.100 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:35:39 compute-0 nova_compute[189491]: 2025-12-01 09:35:39.101 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:35:39 compute-0 nova_compute[189491]: 2025-12-01 09:35:39.170 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:35:39 compute-0 nova_compute[189491]: 2025-12-01 09:35:39.199 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:35:39 compute-0 nova_compute[189491]: 2025-12-01 09:35:39.242 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:35:39 compute-0 nova_compute[189491]: 2025-12-01 09:35:39.242 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:35:40 compute-0 nova_compute[189491]: 2025-12-01 09:35:40.389 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:40 compute-0 nova_compute[189491]: 2025-12-01 09:35:40.470 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:40 compute-0 nova_compute[189491]: 2025-12-01 09:35:40.471 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:40 compute-0 nova_compute[189491]: 2025-12-01 09:35:40.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:41 compute-0 nova_compute[189491]: 2025-12-01 09:35:41.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:42 compute-0 nova_compute[189491]: 2025-12-01 09:35:42.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:42 compute-0 nova_compute[189491]: 2025-12-01 09:35:42.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:35:42 compute-0 nova_compute[189491]: 2025-12-01 09:35:42.713 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:35:42 compute-0 podman[248177]: 2025-12-01 09:35:42.743885604 +0000 UTC m=+0.109966681 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  1 09:35:42 compute-0 nova_compute[189491]: 2025-12-01 09:35:42.826 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:43 compute-0 nova_compute[189491]: 2025-12-01 09:35:43.000 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:44 compute-0 systemd-logind[792]: New session 30 of user zuul.
Dec  1 09:35:44 compute-0 systemd[1]: Started Session 30 of User zuul.
Dec  1 09:35:45 compute-0 podman[248348]: 2025-12-01 09:35:45.468041083 +0000 UTC m=+0.082743569 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:35:45 compute-0 podman[248349]: 2025-12-01 09:35:45.48143233 +0000 UTC m=+0.088735176 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543)
Dec  1 09:35:45 compute-0 python3[248416]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:35:47 compute-0 nova_compute[189491]: 2025-12-01 09:35:47.830 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:48 compute-0 nova_compute[189491]: 2025-12-01 09:35:48.002 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:51 compute-0 podman[248457]: 2025-12-01 09:35:51.718142779 +0000 UTC m=+0.089751099 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:35:51 compute-0 podman[248458]: 2025-12-01 09:35:51.729306711 +0000 UTC m=+0.094817213 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:35:52 compute-0 nova_compute[189491]: 2025-12-01 09:35:52.833 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:53 compute-0 nova_compute[189491]: 2025-12-01 09:35:53.005 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:53 compute-0 python3[248670]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:35:55 compute-0 podman[248711]: 2025-12-01 09:35:55.757790213 +0000 UTC m=+0.108171428 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:35:55 compute-0 podman[248710]: 2025-12-01 09:35:55.757894666 +0000 UTC m=+0.117862475 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  1 09:35:57 compute-0 nova_compute[189491]: 2025-12-01 09:35:57.835 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:58 compute-0 nova_compute[189491]: 2025-12-01 09:35:58.008 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:35:59 compute-0 podman[203700]: time="2025-12-01T09:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:35:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:35:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec  1 09:36:01 compute-0 openstack_network_exporter[205866]: ERROR   09:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:36:01 compute-0 openstack_network_exporter[205866]: ERROR   09:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:36:01 compute-0 openstack_network_exporter[205866]: ERROR   09:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:36:01 compute-0 openstack_network_exporter[205866]: ERROR   09:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:36:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:36:01 compute-0 openstack_network_exporter[205866]: ERROR   09:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:36:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:36:02 compute-0 nova_compute[189491]: 2025-12-01 09:36:02.838 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:03 compute-0 nova_compute[189491]: 2025-12-01 09:36:03.010 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:03 compute-0 python3[248925]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:36:06 compute-0 podman[248962]: 2025-12-01 09:36:06.712041309 +0000 UTC m=+0.079582111 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute)
Dec  1 09:36:06 compute-0 podman[248961]: 2025-12-01 09:36:06.721537401 +0000 UTC m=+0.093598203 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:36:07 compute-0 nova_compute[189491]: 2025-12-01 09:36:07.840 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:08 compute-0 nova_compute[189491]: 2025-12-01 09:36:08.012 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:12 compute-0 nova_compute[189491]: 2025-12-01 09:36:12.843 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:13 compute-0 nova_compute[189491]: 2025-12-01 09:36:13.014 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:13 compute-0 podman[249002]: 2025-12-01 09:36:13.696908804 +0000 UTC m=+0.072435466 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm)
Dec  1 09:36:15 compute-0 podman[249022]: 2025-12-01 09:36:15.697185571 +0000 UTC m=+0.073194056 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4)
Dec  1 09:36:15 compute-0 podman[249021]: 2025-12-01 09:36:15.708597749 +0000 UTC m=+0.088049408 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:36:17 compute-0 nova_compute[189491]: 2025-12-01 09:36:17.845 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:18 compute-0 nova_compute[189491]: 2025-12-01 09:36:18.017 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:18 compute-0 python3[249237]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.787 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.787 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.794 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.798 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'name': 'vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.798 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.798 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.798 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.799 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:36:19.798851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.865 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.865 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.866 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.931 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.932 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.932 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.933 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.933 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:36:19.933387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.955 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.956 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.956 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.979 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.979 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.980 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.981 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.981 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.981 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.982 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:36:19.981664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.982 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.983 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 623315277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.983 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 99798863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.983 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 80231981 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.984 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.984 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.984 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.985 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.985 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.985 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.986 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.986 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.986 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.987 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.988 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.988 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:36:19.984813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.988 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.988 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:36:19.987912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.989 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.989 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.989 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.990 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.990 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.991 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:19.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:36:19.991393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.017 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.043 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.045 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.045 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.045 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.046 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:36:20.045273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.046 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 664336258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.046 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 9391906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.046 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.047 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.047 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.047 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:36:20.047642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.048 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.048 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.048 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.048 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.049 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.049 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.049 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:36:20.049896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.053 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.056 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.057 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.057 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:36:20.057700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.058 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.059 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:36:20.059029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.059 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.059 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.060 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.060 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.061 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.061 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.061 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:36:20.060284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:36:20.061253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:36:20.062556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.062 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.063 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:36:20.063642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.064 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:36:20.064789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.066 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.066 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.066 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:36:20.066143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.067 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.067 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:36:20.067287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.067 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.068 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.068 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.068 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.068 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:36:20.068435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 43040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.069 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/cpu volume: 41130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.070 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.070 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.070 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.071 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:36:20.069581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.071 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:36:20.070719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.071 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.071 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.071 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.072 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.072 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.072 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.073 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.074 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:36:20.072553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.075 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.075 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:36:20.075015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.075 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.075 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.075 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.075 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.076 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.076 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.077 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:36:20.076869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.077 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:36:20.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:36:22 compute-0 podman[249277]: 2025-12-01 09:36:22.695022313 +0000 UTC m=+0.066524623 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 09:36:22 compute-0 podman[249278]: 2025-12-01 09:36:22.720657298 +0000 UTC m=+0.090437156 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 09:36:22 compute-0 nova_compute[189491]: 2025-12-01 09:36:22.850 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:23 compute-0 nova_compute[189491]: 2025-12-01 09:36:23.019 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:36:26.524 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:36:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:36:26.525 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:36:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:36:26.525 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:36:26 compute-0 podman[249315]: 2025-12-01 09:36:26.687585453 +0000 UTC m=+0.063832018 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:36:26 compute-0 podman[249316]: 2025-12-01 09:36:26.731240477 +0000 UTC m=+0.104161951 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  1 09:36:27 compute-0 nova_compute[189491]: 2025-12-01 09:36:27.849 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:28 compute-0 nova_compute[189491]: 2025-12-01 09:36:28.021 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:29 compute-0 podman[203700]: time="2025-12-01T09:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:36:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:36:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4819 "" "Go-http-client/1.1"
Dec  1 09:36:31 compute-0 openstack_network_exporter[205866]: ERROR   09:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:36:31 compute-0 openstack_network_exporter[205866]: ERROR   09:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:36:31 compute-0 openstack_network_exporter[205866]: ERROR   09:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:36:31 compute-0 openstack_network_exporter[205866]: ERROR   09:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:36:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:36:31 compute-0 openstack_network_exporter[205866]: ERROR   09:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:36:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:36:32 compute-0 nova_compute[189491]: 2025-12-01 09:36:32.851 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:33 compute-0 nova_compute[189491]: 2025-12-01 09:36:33.023 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:33 compute-0 nova_compute[189491]: 2025-12-01 09:36:33.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:33 compute-0 nova_compute[189491]: 2025-12-01 09:36:33.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:36:33 compute-0 nova_compute[189491]: 2025-12-01 09:36:33.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:36:34 compute-0 nova_compute[189491]: 2025-12-01 09:36:34.099 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:36:34 compute-0 nova_compute[189491]: 2025-12-01 09:36:34.100 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:36:34 compute-0 nova_compute[189491]: 2025-12-01 09:36:34.100 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:36:34 compute-0 nova_compute[189491]: 2025-12-01 09:36:34.100 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.446 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.482 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.482 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.738 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.738 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.739 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.740 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.814 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.872 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.873 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.944 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:36:35 compute-0 nova_compute[189491]: 2025-12-01 09:36:35.945 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.014 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.016 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.080 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.090 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.149 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.150 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.219 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.220 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.279 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.281 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.340 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.717 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.718 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4799MB free_disk=72.33493041992188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.718 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.719 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.814 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.815 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.815 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.815 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.879 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.938 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.940 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:36:36 compute-0 nova_compute[189491]: 2025-12-01 09:36:36.940 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:36:37 compute-0 podman[249385]: 2025-12-01 09:36:37.714931094 +0000 UTC m=+0.081544909 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 09:36:37 compute-0 podman[249384]: 2025-12-01 09:36:37.718741707 +0000 UTC m=+0.093908050 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:36:37 compute-0 nova_compute[189491]: 2025-12-01 09:36:37.853 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:38 compute-0 nova_compute[189491]: 2025-12-01 09:36:38.025 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:39 compute-0 nova_compute[189491]: 2025-12-01 09:36:39.940 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:40 compute-0 nova_compute[189491]: 2025-12-01 09:36:40.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:40 compute-0 nova_compute[189491]: 2025-12-01 09:36:40.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:41 compute-0 nova_compute[189491]: 2025-12-01 09:36:41.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:42 compute-0 nova_compute[189491]: 2025-12-01 09:36:42.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:42 compute-0 nova_compute[189491]: 2025-12-01 09:36:42.857 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:43 compute-0 nova_compute[189491]: 2025-12-01 09:36:43.027 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:43 compute-0 nova_compute[189491]: 2025-12-01 09:36:43.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:43 compute-0 nova_compute[189491]: 2025-12-01 09:36:43.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:36:43 compute-0 nova_compute[189491]: 2025-12-01 09:36:43.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:36:44 compute-0 podman[249427]: 2025-12-01 09:36:44.727741476 +0000 UTC m=+0.103385502 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:36:46 compute-0 podman[249446]: 2025-12-01 09:36:46.70333584 +0000 UTC m=+0.075131103 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:36:46 compute-0 podman[249447]: 2025-12-01 09:36:46.7049694 +0000 UTC m=+0.075139042 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc.)
Dec  1 09:36:47 compute-0 nova_compute[189491]: 2025-12-01 09:36:47.860 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:48 compute-0 nova_compute[189491]: 2025-12-01 09:36:48.030 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:48 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 09:36:52 compute-0 nova_compute[189491]: 2025-12-01 09:36:52.867 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:53 compute-0 nova_compute[189491]: 2025-12-01 09:36:53.032 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:53 compute-0 podman[249491]: 2025-12-01 09:36:53.698251757 +0000 UTC m=+0.076725502 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, architecture=x86_64, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec  1 09:36:53 compute-0 podman[249492]: 2025-12-01 09:36:53.719446384 +0000 UTC m=+0.086809048 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:36:57 compute-0 podman[249530]: 2025-12-01 09:36:57.753565734 +0000 UTC m=+0.123936953 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:36:57 compute-0 podman[249531]: 2025-12-01 09:36:57.80631207 +0000 UTC m=+0.163476147 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 09:36:57 compute-0 nova_compute[189491]: 2025-12-01 09:36:57.869 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:58 compute-0 nova_compute[189491]: 2025-12-01 09:36:58.035 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:36:59 compute-0 podman[203700]: time="2025-12-01T09:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:36:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:36:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 09:37:01 compute-0 openstack_network_exporter[205866]: ERROR   09:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:37:01 compute-0 openstack_network_exporter[205866]: ERROR   09:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:37:01 compute-0 openstack_network_exporter[205866]: ERROR   09:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:37:01 compute-0 openstack_network_exporter[205866]: ERROR   09:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:37:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:37:01 compute-0 openstack_network_exporter[205866]: ERROR   09:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:37:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:37:02 compute-0 nova_compute[189491]: 2025-12-01 09:37:02.871 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:03 compute-0 nova_compute[189491]: 2025-12-01 09:37:03.041 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:07 compute-0 nova_compute[189491]: 2025-12-01 09:37:07.874 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:08 compute-0 nova_compute[189491]: 2025-12-01 09:37:08.044 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:08 compute-0 podman[249574]: 2025-12-01 09:37:08.725108384 +0000 UTC m=+0.081612410 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 09:37:08 compute-0 podman[249573]: 2025-12-01 09:37:08.737256611 +0000 UTC m=+0.111596113 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:37:12 compute-0 nova_compute[189491]: 2025-12-01 09:37:12.876 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:13 compute-0 nova_compute[189491]: 2025-12-01 09:37:13.046 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:15 compute-0 podman[249616]: 2025-12-01 09:37:15.768016429 +0000 UTC m=+0.126481616 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 09:37:17 compute-0 podman[249634]: 2025-12-01 09:37:17.704129762 +0000 UTC m=+0.066736988 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:37:17 compute-0 podman[249635]: 2025-12-01 09:37:17.747663343 +0000 UTC m=+0.106955629 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_id=edpm, name=ubi9, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=)
Dec  1 09:37:17 compute-0 nova_compute[189491]: 2025-12-01 09:37:17.880 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:18 compute-0 nova_compute[189491]: 2025-12-01 09:37:18.049 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:18 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec  1 09:37:18 compute-0 systemd[1]: session-30.scope: Consumed 3.921s CPU time.
Dec  1 09:37:18 compute-0 systemd-logind[792]: Session 30 logged out. Waiting for processes to exit.
Dec  1 09:37:18 compute-0 systemd-logind[792]: Removed session 30.
Dec  1 09:37:22 compute-0 nova_compute[189491]: 2025-12-01 09:37:22.884 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:23 compute-0 nova_compute[189491]: 2025-12-01 09:37:23.053 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:24 compute-0 podman[249673]: 2025-12-01 09:37:24.708821661 +0000 UTC m=+0.071037843 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7)
Dec  1 09:37:24 compute-0 podman[249674]: 2025-12-01 09:37:24.73214701 +0000 UTC m=+0.089451992 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Dec  1 09:37:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:37:26.526 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:37:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:37:26.528 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:37:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:37:26.529 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:37:27 compute-0 nova_compute[189491]: 2025-12-01 09:37:27.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:27 compute-0 nova_compute[189491]: 2025-12-01 09:37:27.720 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:37:27 compute-0 nova_compute[189491]: 2025-12-01 09:37:27.886 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:28 compute-0 nova_compute[189491]: 2025-12-01 09:37:28.057 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:28 compute-0 podman[249713]: 2025-12-01 09:37:28.730784005 +0000 UTC m=+0.091578665 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  1 09:37:28 compute-0 podman[249714]: 2025-12-01 09:37:28.751194092 +0000 UTC m=+0.115245041 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 09:37:29 compute-0 podman[203700]: time="2025-12-01T09:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:37:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:37:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec  1 09:37:31 compute-0 openstack_network_exporter[205866]: ERROR   09:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:37:31 compute-0 openstack_network_exporter[205866]: ERROR   09:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:37:31 compute-0 openstack_network_exporter[205866]: ERROR   09:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:37:31 compute-0 openstack_network_exporter[205866]: ERROR   09:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:37:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:37:31 compute-0 openstack_network_exporter[205866]: ERROR   09:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:37:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:37:32 compute-0 nova_compute[189491]: 2025-12-01 09:37:32.888 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:33 compute-0 nova_compute[189491]: 2025-12-01 09:37:33.062 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:33 compute-0 nova_compute[189491]: 2025-12-01 09:37:33.754 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:33 compute-0 nova_compute[189491]: 2025-12-01 09:37:33.754 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:37:34 compute-0 nova_compute[189491]: 2025-12-01 09:37:34.420 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:37:34 compute-0 nova_compute[189491]: 2025-12-01 09:37:34.421 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:37:34 compute-0 nova_compute[189491]: 2025-12-01 09:37:34.421 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:37:35 compute-0 nova_compute[189491]: 2025-12-01 09:37:35.859 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updating instance_info_cache with network_info: [{"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:37:35 compute-0 nova_compute[189491]: 2025-12-01 09:37:35.893 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:37:35 compute-0 nova_compute[189491]: 2025-12-01 09:37:35.895 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.747 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.747 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.829 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.891 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.895 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.896 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.961 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:37:37 compute-0 nova_compute[189491]: 2025-12-01 09:37:37.962 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.023 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.025 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.064 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.101 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.110 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.177 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.178 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.239 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.240 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.321 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.322 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.383 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.767 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.768 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4800MB free_disk=72.33493041992188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.769 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.769 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.866 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.866 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.866 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.867 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.884 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.901 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.901 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.917 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:37:38 compute-0 nova_compute[189491]: 2025-12-01 09:37:38.939 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:37:39 compute-0 nova_compute[189491]: 2025-12-01 09:37:39.007 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:37:39 compute-0 nova_compute[189491]: 2025-12-01 09:37:39.023 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:37:39 compute-0 nova_compute[189491]: 2025-12-01 09:37:39.025 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:37:39 compute-0 nova_compute[189491]: 2025-12-01 09:37:39.025 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:37:39 compute-0 podman[249778]: 2025-12-01 09:37:39.704288848 +0000 UTC m=+0.072793966 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 09:37:39 compute-0 podman[249777]: 2025-12-01 09:37:39.728844477 +0000 UTC m=+0.100808879 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:37:40 compute-0 nova_compute[189491]: 2025-12-01 09:37:40.025 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:40 compute-0 nova_compute[189491]: 2025-12-01 09:37:40.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:40 compute-0 nova_compute[189491]: 2025-12-01 09:37:40.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:40 compute-0 nova_compute[189491]: 2025-12-01 09:37:40.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:41 compute-0 nova_compute[189491]: 2025-12-01 09:37:41.920 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:41 compute-0 nova_compute[189491]: 2025-12-01 09:37:41.920 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:37:41 compute-0 nova_compute[189491]: 2025-12-01 09:37:41.953 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:37:42 compute-0 nova_compute[189491]: 2025-12-01 09:37:42.742 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:42 compute-0 nova_compute[189491]: 2025-12-01 09:37:42.894 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:43 compute-0 nova_compute[189491]: 2025-12-01 09:37:43.067 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:43 compute-0 nova_compute[189491]: 2025-12-01 09:37:43.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:43 compute-0 nova_compute[189491]: 2025-12-01 09:37:43.837 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:44 compute-0 nova_compute[189491]: 2025-12-01 09:37:44.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:45 compute-0 nova_compute[189491]: 2025-12-01 09:37:45.221 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:45 compute-0 nova_compute[189491]: 2025-12-01 09:37:45.746 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:37:45 compute-0 nova_compute[189491]: 2025-12-01 09:37:45.747 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:37:46 compute-0 podman[249818]: 2025-12-01 09:37:46.715581158 +0000 UTC m=+0.090437267 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:37:47 compute-0 nova_compute[189491]: 2025-12-01 09:37:47.897 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:48 compute-0 nova_compute[189491]: 2025-12-01 09:37:48.069 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:48 compute-0 podman[249838]: 2025-12-01 09:37:48.701600457 +0000 UTC m=+0.072187651 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:37:48 compute-0 podman[249839]: 2025-12-01 09:37:48.739529252 +0000 UTC m=+0.106380585 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-type=git, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  1 09:37:52 compute-0 nova_compute[189491]: 2025-12-01 09:37:52.899 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:53 compute-0 nova_compute[189491]: 2025-12-01 09:37:53.072 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:55 compute-0 podman[249884]: 2025-12-01 09:37:55.706258875 +0000 UTC m=+0.077657704 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:37:55 compute-0 podman[249883]: 2025-12-01 09:37:55.706301696 +0000 UTC m=+0.081210551 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec  1 09:37:57 compute-0 nova_compute[189491]: 2025-12-01 09:37:57.902 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:58 compute-0 nova_compute[189491]: 2025-12-01 09:37:58.075 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:37:59 compute-0 podman[249919]: 2025-12-01 09:37:59.716775318 +0000 UTC m=+0.083028625 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 09:37:59 compute-0 podman[203700]: time="2025-12-01T09:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:37:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:37:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  1 09:37:59 compute-0 podman[249920]: 2025-12-01 09:37:59.785605697 +0000 UTC m=+0.146546155 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 09:38:01 compute-0 openstack_network_exporter[205866]: ERROR   09:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:38:01 compute-0 openstack_network_exporter[205866]: ERROR   09:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:38:01 compute-0 openstack_network_exporter[205866]: ERROR   09:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:38:01 compute-0 openstack_network_exporter[205866]: ERROR   09:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:38:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:38:01 compute-0 openstack_network_exporter[205866]: ERROR   09:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:38:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:38:02 compute-0 nova_compute[189491]: 2025-12-01 09:38:02.904 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:03 compute-0 nova_compute[189491]: 2025-12-01 09:38:03.077 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:07 compute-0 nova_compute[189491]: 2025-12-01 09:38:07.906 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:08 compute-0 nova_compute[189491]: 2025-12-01 09:38:08.079 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.112 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.136 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid 7ed22ffd-011d-48d7-962a-8606e471a59e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.137 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.137 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.138 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.139 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.139 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.169 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:10 compute-0 nova_compute[189491]: 2025-12-01 09:38:10.170 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:10 compute-0 podman[249964]: 2025-12-01 09:38:10.704912211 +0000 UTC m=+0.079359946 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:38:10 compute-0 podman[249965]: 2025-12-01 09:38:10.740696244 +0000 UTC m=+0.106518499 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 09:38:12 compute-0 nova_compute[189491]: 2025-12-01 09:38:12.908 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:13 compute-0 nova_compute[189491]: 2025-12-01 09:38:13.081 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:17 compute-0 podman[250005]: 2025-12-01 09:38:17.747898643 +0000 UTC m=+0.115454846 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 09:38:17 compute-0 nova_compute[189491]: 2025-12-01 09:38:17.912 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:18 compute-0 nova_compute[189491]: 2025-12-01 09:38:18.084 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:19 compute-0 podman[250026]: 2025-12-01 09:38:19.698294603 +0000 UTC m=+0.077177453 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:38:19 compute-0 podman[250027]: 2025-12-01 09:38:19.717346568 +0000 UTC m=+0.091769399 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, vcs-type=git, io.openshift.tags=base rhel9, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible)
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.787 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.788 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c4fcfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.797 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'name': 'test_0', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.800 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'name': 'vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge', 'flavor': {'id': '719a52fe-7f4b-48c0-b9dc-6a91d4ec600c', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '304c689d-2799-45ae-8166-517d5fd107b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'user_id': '962a55152ff34fdda5eae1f8aee3a7a9', 'hostId': '8e7812466a0145cddd68754a1627db4b56b0a23f372c34432aca44f1', 'status': 'active', 'metadata': {'metering.server_group': '1555a697-b0aa-4429-98e7-26e6671e228d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.801 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.801 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.801 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:38:19.801353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.900 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.901 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:19.901 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.007 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.008 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.008 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.010 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:38:20.010518) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.038 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.039 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.039 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.070 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.071 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.071 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.073 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.073 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 476643826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.073 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 112985408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.074 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.latency volume: 87581444 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.074 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 623315277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.074 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 99798863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.075 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.latency volume: 80231981 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.076 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.076 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.077 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.077 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.077 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.078 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:38:20.073225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:38:20.076499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.079 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.079 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.079 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.080 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.080 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:38:20.079766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.081 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.081 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.081 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.082 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.083 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:38:20.083052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.108 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.130 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.131 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.131 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.131 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 1809136387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.131 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 11785635 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.132 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.132 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 664336258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.132 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 9391906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.132 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.133 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:38:20.131440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.134 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.134 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.134 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.135 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.135 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.135 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.136 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.136 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.136 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:38:20.134159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:38:20.136480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.139 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.143 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.143 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.144 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.144 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.144 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.145 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:38:20.144372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.146 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.146 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.146 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:38:20.146112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.147 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.147 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.147 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.147 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.148 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:38:20.147727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.148 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.149 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.149 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:38:20.149111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.149 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.149 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.150 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.150 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.150 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.150 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.151 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.152 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.152 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:38:20.150598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:38:20.152010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.153 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.154 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:38:20.154011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.154 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.154 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.155 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:38:20.155535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:38:20.157076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.157 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.157 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.158 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.158 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.158 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.158 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.159 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.159 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/cpu volume: 44350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.160 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/cpu volume: 42490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.160 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:38:20.158421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:38:20.159782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:38:20.161360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.161 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.162 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.162 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.162 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.162 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.162 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.163 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.163 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.163 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.163 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.163 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.164 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:38:20.163668) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.164 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.165 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.165 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.165 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:38:20.165214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.166 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.166 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.166 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.166 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.167 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.167 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.167 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.167 14 DEBUG ceilometer.compute.pollsters [-] 7ed22ffd-011d-48d7-962a-8606e471a59e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.168 14 DEBUG ceilometer.compute.pollsters [-] 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:38:20.167563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.169 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.169 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.169 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:38:20.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.470 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.470 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.471 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.471 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.472 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.473 189495 INFO nova.compute.manager [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Terminating instance#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.475 189495 DEBUG nova.compute.manager [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:38:21 compute-0 kernel: tap609b09f2-6c (unregistering): left promiscuous mode
Dec  1 09:38:21 compute-0 NetworkManager[56318]: <info>  [1764581901.5203] device (tap609b09f2-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:38:21 compute-0 ovn_controller[97794]: 2025-12-01T09:38:21Z|00058|binding|INFO|Releasing lport 609b09f2-6c63-41e7-9850-15c0098f35b4 from this chassis (sb_readonly=0)
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.534 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:21 compute-0 ovn_controller[97794]: 2025-12-01T09:38:21Z|00059|binding|INFO|Setting lport 609b09f2-6c63-41e7-9850-15c0098f35b4 down in Southbound
Dec  1 09:38:21 compute-0 ovn_controller[97794]: 2025-12-01T09:38:21Z|00060|binding|INFO|Removing iface tap609b09f2-6c ovn-installed in OVS
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.537 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.544 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:39:1e 192.168.0.18'], port_security=['fa:16:3e:40:39:1e 192.168.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vdfkxa75cfa3-aohxquokylp7-2qxsn2rwux5j-port-smaxskxe3vm7', 'neutron:cidrs': '192.168.0.18/24', 'neutron:device_id': '97dcaede-87ef-4c1c-a4a8-4ec9587cfe86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vdfkxa75cfa3-aohxquokylp7-2qxsn2rwux5j-port-smaxskxe3vm7', 'neutron:project_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5a5e6d4-6211-447f-b3f6-e2120ff69d87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=260b7b6c-4405-41e2-9dc8-1595161adaf8, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=609b09f2-6c63-41e7-9850-15c0098f35b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.546 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 609b09f2-6c63-41e7-9850-15c0098f35b4 in datapath 52d15875-2a2e-463a-bc5d-8fa6b8466bff unbound from our chassis#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.547 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52d15875-2a2e-463a-bc5d-8fa6b8466bff#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.548 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.567 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[26d3488b-f253-4f33-bd6c-c113ca1f06c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:21 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  1 09:38:21 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 8.238s CPU time.
Dec  1 09:38:21 compute-0 systemd-machined[155812]: Machine qemu-4-instance-00000004 terminated.
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.603 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[2f290279-7c7f-407f-8663-60b6168c5964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.607 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[a6798de2-3e2b-41a0-a648-6c5f2ed90c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.639 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[eabd149d-dee9-4829-96f2-5846b263b96c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.660 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a400c960-47d9-4e6a-8e74-7cc600d96d6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52d15875-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:8c:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383928, 'reachable_time': 26769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250084, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.686 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[57e4f27f-2c6d-4faf-a72b-e49837957655]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383943, 'tstamp': 383943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250085, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap52d15875-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383945, 'tstamp': 383945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250085, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.690 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d15875-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.693 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.703 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.704 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d15875-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.705 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.706 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52d15875-20, col_values=(('external_ids', {'iface-id': 'dbcd2eb8-9722-4ebb-b254-d57f599617d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.706 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.779 189495 DEBUG nova.compute.manager [req-50bb5fb7-2e22-49f5-b088-fdca03e5f2cb req-1b32cc03-38ea-4df7-93ab-95439937002f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received event network-vif-unplugged-609b09f2-6c63-41e7-9850-15c0098f35b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.780 189495 DEBUG oslo_concurrency.lockutils [req-50bb5fb7-2e22-49f5-b088-fdca03e5f2cb req-1b32cc03-38ea-4df7-93ab-95439937002f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.780 189495 DEBUG oslo_concurrency.lockutils [req-50bb5fb7-2e22-49f5-b088-fdca03e5f2cb req-1b32cc03-38ea-4df7-93ab-95439937002f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.780 189495 DEBUG oslo_concurrency.lockutils [req-50bb5fb7-2e22-49f5-b088-fdca03e5f2cb req-1b32cc03-38ea-4df7-93ab-95439937002f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.781 189495 DEBUG nova.compute.manager [req-50bb5fb7-2e22-49f5-b088-fdca03e5f2cb req-1b32cc03-38ea-4df7-93ab-95439937002f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] No waiting events found dispatching network-vif-unplugged-609b09f2-6c63-41e7-9850-15c0098f35b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.781 189495 DEBUG nova.compute.manager [req-50bb5fb7-2e22-49f5-b088-fdca03e5f2cb req-1b32cc03-38ea-4df7-93ab-95439937002f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received event network-vif-unplugged-609b09f2-6c63-41e7-9850-15c0098f35b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.796 189495 INFO nova.virt.libvirt.driver [-] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Instance destroyed successfully.#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.798 189495 DEBUG nova.objects.instance [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'resources' on Instance uuid 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.814 189495 DEBUG nova.virt.libvirt.vif [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:25:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-a75cfa3-aohxquokylp7-2qxsn2rwux5j-vnf-gncrlbwrk3ge',id=4,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:26:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1555a697-b0aa-4429-98e7-26e6671e228d'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-gcvg4l82',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:26:11Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjUzMTI2MzcyNTc3ODUyMDk5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  1 09:38:21 compute-0 nova_compute[189491]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjUzM
TI2MzcyNTc3ODUyMDk5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI1MzEyNjM3MjU3Nzg1MjA5OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNTMxMjYzNzI1Nzc4NTIwOTkyPT0tLQo=',user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=97dcaede-87ef-4c1c-a4a8-4ec9587cfe86,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.815 189495 DEBUG nova.network.os_vif_util [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "609b09f2-6c63-41e7-9850-15c0098f35b4", "address": "fa:16:3e:40:39:1e", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609b09f2-6c", "ovs_interfaceid": "609b09f2-6c63-41e7-9850-15c0098f35b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.816 189495 DEBUG nova.network.os_vif_util [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:39:1e,bridge_name='br-int',has_traffic_filtering=True,id=609b09f2-6c63-41e7-9850-15c0098f35b4,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609b09f2-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.817 189495 DEBUG os_vif [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:39:1e,bridge_name='br-int',has_traffic_filtering=True,id=609b09f2-6c63-41e7-9850-15c0098f35b4,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609b09f2-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.820 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.820 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap609b09f2-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.824 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.826 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.829 189495 INFO os_vif [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:39:1e,bridge_name='br-int',has_traffic_filtering=True,id=609b09f2-6c63-41e7-9850-15c0098f35b4,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609b09f2-6c')#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.830 189495 INFO nova.virt.libvirt.driver [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Deleting instance files /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86_del#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.830 189495 INFO nova.virt.libvirt.driver [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Deletion of /var/lib/nova/instances/97dcaede-87ef-4c1c-a4a8-4ec9587cfe86_del complete#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.872 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:38:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:21.873 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.874 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.894 189495 INFO nova.compute.manager [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.895 189495 DEBUG oslo.service.loopingcall [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.895 189495 DEBUG nova.compute.manager [-] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:38:21 compute-0 nova_compute[189491]: 2025-12-01 09:38:21.895 189495 DEBUG nova.network.neutron [-] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:38:22 compute-0 rsyslogd[236849]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 09:38:21.814 189495 DEBUG nova.virt.libvirt.vif [None req-c08e731d-a8 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 09:38:22 compute-0 nova_compute[189491]: 2025-12-01 09:38:22.914 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.336 189495 DEBUG nova.network.neutron [-] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.357 189495 INFO nova.compute.manager [-] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Took 1.46 seconds to deallocate network for instance.#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.397 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.398 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.482 189495 DEBUG nova.compute.provider_tree [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.499 189495 DEBUG nova.scheduler.client.report [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.520 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.552 189495 INFO nova.scheduler.client.report [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Deleted allocations for instance 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.608 189495 DEBUG oslo_concurrency.lockutils [None req-c08e731d-a839-4573-8c01-70271334ab3a 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.861 189495 DEBUG nova.compute.manager [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received event network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.863 189495 DEBUG oslo_concurrency.lockutils [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.863 189495 DEBUG oslo_concurrency.lockutils [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.864 189495 DEBUG oslo_concurrency.lockutils [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "97dcaede-87ef-4c1c-a4a8-4ec9587cfe86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.864 189495 DEBUG nova.compute.manager [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] No waiting events found dispatching network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.865 189495 WARNING nova.compute.manager [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received unexpected event network-vif-plugged-609b09f2-6c63-41e7-9850-15c0098f35b4 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.865 189495 DEBUG nova.compute.manager [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Received event network-changed-609b09f2-6c63-41e7-9850-15c0098f35b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.865 189495 DEBUG nova.compute.manager [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Refreshing instance network info cache due to event network-changed-609b09f2-6c63-41e7-9850-15c0098f35b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.866 189495 DEBUG oslo_concurrency.lockutils [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.866 189495 DEBUG oslo_concurrency.lockutils [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.867 189495 DEBUG nova.network.neutron [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Refreshing network info cache for port 609b09f2-6c63-41e7-9850-15c0098f35b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:38:23 compute-0 nova_compute[189491]: 2025-12-01 09:38:23.981 189495 DEBUG nova.network.neutron [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:38:24 compute-0 nova_compute[189491]: 2025-12-01 09:38:24.458 189495 DEBUG nova.network.neutron [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec  1 09:38:24 compute-0 nova_compute[189491]: 2025-12-01 09:38:24.459 189495 DEBUG oslo_concurrency.lockutils [req-0e0d076f-9b92-44dc-9814-2c33bb080408 req-aee5c06a-92f4-4589-a44b-6790982b9145 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-97dcaede-87ef-4c1c-a4a8-4ec9587cfe86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:38:24 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:24.875 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:38:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:26.527 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:26.527 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:26.528 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:26 compute-0 podman[250103]: 2025-12-01 09:38:26.699948869 +0000 UTC m=+0.067354073 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:38:26 compute-0 podman[250102]: 2025-12-01 09:38:26.723510614 +0000 UTC m=+0.094855004 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal)
Dec  1 09:38:26 compute-0 nova_compute[189491]: 2025-12-01 09:38:26.823 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:27 compute-0 nova_compute[189491]: 2025-12-01 09:38:27.915 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:29 compute-0 podman[203700]: time="2025-12-01T09:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:38:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:38:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  1 09:38:30 compute-0 podman[250142]: 2025-12-01 09:38:30.729143531 +0000 UTC m=+0.104719894 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 09:38:30 compute-0 podman[250141]: 2025-12-01 09:38:30.736326056 +0000 UTC m=+0.113953199 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 09:38:31 compute-0 openstack_network_exporter[205866]: ERROR   09:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:38:31 compute-0 openstack_network_exporter[205866]: ERROR   09:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:38:31 compute-0 openstack_network_exporter[205866]: ERROR   09:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:38:31 compute-0 openstack_network_exporter[205866]: ERROR   09:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:38:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:38:31 compute-0 openstack_network_exporter[205866]: ERROR   09:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:38:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:38:31 compute-0 nova_compute[189491]: 2025-12-01 09:38:31.825 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:32 compute-0 nova_compute[189491]: 2025-12-01 09:38:32.918 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:33 compute-0 nova_compute[189491]: 2025-12-01 09:38:33.741 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:33 compute-0 nova_compute[189491]: 2025-12-01 09:38:33.742 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:38:33 compute-0 nova_compute[189491]: 2025-12-01 09:38:33.742 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:38:34 compute-0 nova_compute[189491]: 2025-12-01 09:38:34.474 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:38:34 compute-0 nova_compute[189491]: 2025-12-01 09:38:34.474 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:38:34 compute-0 nova_compute[189491]: 2025-12-01 09:38:34.475 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:38:34 compute-0 nova_compute[189491]: 2025-12-01 09:38:34.475 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:38:35 compute-0 nova_compute[189491]: 2025-12-01 09:38:35.916 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [{"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:38:35 compute-0 nova_compute[189491]: 2025-12-01 09:38:35.931 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-7ed22ffd-011d-48d7-962a-8606e471a59e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:38:35 compute-0 nova_compute[189491]: 2025-12-01 09:38:35.932 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:38:36 compute-0 nova_compute[189491]: 2025-12-01 09:38:36.792 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764581901.7906656, 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:38:36 compute-0 nova_compute[189491]: 2025-12-01 09:38:36.793 189495 INFO nova.compute.manager [-] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:38:36 compute-0 nova_compute[189491]: 2025-12-01 09:38:36.823 189495 DEBUG nova.compute.manager [None req-23976001-8ddb-42cf-b39e-2e449b0bbadf - - - - - -] [instance: 97dcaede-87ef-4c1c-a4a8-4ec9587cfe86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:38:36 compute-0 nova_compute[189491]: 2025-12-01 09:38:36.827 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.753 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.754 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.755 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.755 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.856 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.919 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.945 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:38:37 compute-0 nova_compute[189491]: 2025-12-01 09:38:37.946 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.013 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.014 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.089 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.091 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.160 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.625 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.626 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5083MB free_disk=72.35742950439453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.626 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.627 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.804 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7ed22ffd-011d-48d7-962a-8606e471a59e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.805 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.806 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.944 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.961 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.986 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:38:38 compute-0 nova_compute[189491]: 2025-12-01 09:38:38.987 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.731 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.731 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.732 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.732 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.733 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.734 189495 INFO nova.compute.manager [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Terminating instance#033[00m
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.736 189495 DEBUG nova.compute.manager [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:38:39 compute-0 kernel: tap1632735e-15 (unregistering): left promiscuous mode
Dec  1 09:38:39 compute-0 NetworkManager[56318]: <info>  [1764581919.7963] device (tap1632735e-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.802 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:39 compute-0 ovn_controller[97794]: 2025-12-01T09:38:39Z|00061|binding|INFO|Releasing lport 1632735e-15c5-4d6b-a450-baa001b88ac2 from this chassis (sb_readonly=0)
Dec  1 09:38:39 compute-0 ovn_controller[97794]: 2025-12-01T09:38:39Z|00062|binding|INFO|Setting lport 1632735e-15c5-4d6b-a450-baa001b88ac2 down in Southbound
Dec  1 09:38:39 compute-0 ovn_controller[97794]: 2025-12-01T09:38:39Z|00063|binding|INFO|Removing iface tap1632735e-15 ovn-installed in OVS
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.812 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:39.820 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:bd:b4 192.168.0.55'], port_security=['fa:16:3e:d4:bd:b4 192.168.0.55'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.55/24', 'neutron:device_id': '7ed22ffd-011d-48d7-962a-8606e471a59e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fac95b8a995a4174bfa966a8d9d9aa01', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5a5e6d4-6211-447f-b3f6-e2120ff69d87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=260b7b6c-4405-41e2-9dc8-1595161adaf8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=1632735e-15c5-4d6b-a450-baa001b88ac2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:38:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:39.821 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 1632735e-15c5-4d6b-a450-baa001b88ac2 in datapath 52d15875-2a2e-463a-bc5d-8fa6b8466bff unbound from our chassis#033[00m
Dec  1 09:38:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:39.822 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52d15875-2a2e-463a-bc5d-8fa6b8466bff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:38:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:39.823 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[85c95a21-bf0b-4000-b7a3-7f6d62d9bd2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:39.825 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff namespace which is not needed anymore#033[00m
Dec  1 09:38:39 compute-0 nova_compute[189491]: 2025-12-01 09:38:39.833 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:39 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  1 09:38:39 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 26.663s CPU time.
Dec  1 09:38:39 compute-0 systemd-machined[155812]: Machine qemu-1-instance-00000001 terminated.
Dec  1 09:38:40 compute-0 neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff[239959]: [NOTICE]   (239963) : haproxy version is 2.8.14-c23fe91
Dec  1 09:38:40 compute-0 neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff[239959]: [NOTICE]   (239963) : path to executable is /usr/sbin/haproxy
Dec  1 09:38:40 compute-0 neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff[239959]: [WARNING]  (239963) : Exiting Master process...
Dec  1 09:38:40 compute-0 neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff[239959]: [WARNING]  (239963) : Exiting Master process...
Dec  1 09:38:40 compute-0 neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff[239959]: [ALERT]    (239963) : Current worker (239965) exited with code 143 (Terminated)
Dec  1 09:38:40 compute-0 neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff[239959]: [WARNING]  (239963) : All workers exited. Exiting... (0)
Dec  1 09:38:40 compute-0 systemd[1]: libpod-2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5.scope: Deactivated successfully.
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.039 189495 INFO nova.virt.libvirt.driver [-] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Instance destroyed successfully.#033[00m
Dec  1 09:38:40 compute-0 podman[250219]: 2025-12-01 09:38:40.03980504 +0000 UTC m=+0.077428630 container died 2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.039 189495 DEBUG nova.objects.instance [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lazy-loading 'resources' on Instance uuid 7ed22ffd-011d-48d7-962a-8606e471a59e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.052 189495 DEBUG nova.virt.libvirt.vif [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:16:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='304c689d-2799-45ae-8166-517d5fd107b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:16:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fac95b8a995a4174bfa966a8d9d9aa01',ramdisk_id='',reservation_id='r-tw90szn6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='304c689d-2799-45ae-8166-517d5fd107b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.op
enstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:16:36Z,user_data=None,user_id='962a55152ff34fdda5eae1f8aee3a7a9',uuid=7ed22ffd-011d-48d7-962a-8606e471a59e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.054 189495 DEBUG nova.network.os_vif_util [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converting VIF {"id": "1632735e-15c5-4d6b-a450-baa001b88ac2", "address": "fa:16:3e:d4:bd:b4", "network": {"id": "52d15875-2a2e-463a-bc5d-8fa6b8466bff", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.55", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fac95b8a995a4174bfa966a8d9d9aa01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1632735e-15", "ovs_interfaceid": "1632735e-15c5-4d6b-a450-baa001b88ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.055 189495 DEBUG nova.network.os_vif_util [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:bd:b4,bridge_name='br-int',has_traffic_filtering=True,id=1632735e-15c5-4d6b-a450-baa001b88ac2,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1632735e-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.056 189495 DEBUG os_vif [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:bd:b4,bridge_name='br-int',has_traffic_filtering=True,id=1632735e-15c5-4d6b-a450-baa001b88ac2,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1632735e-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.058 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.059 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1632735e-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.061 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.063 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.070 189495 INFO os_vif [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:bd:b4,bridge_name='br-int',has_traffic_filtering=True,id=1632735e-15c5-4d6b-a450-baa001b88ac2,network=Network(52d15875-2a2e-463a-bc5d-8fa6b8466bff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1632735e-15')#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.071 189495 INFO nova.virt.libvirt.driver [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Deleting instance files /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e_del#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.072 189495 INFO nova.virt.libvirt.driver [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Deletion of /var/lib/nova/instances/7ed22ffd-011d-48d7-962a-8606e471a59e_del complete#033[00m
Dec  1 09:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5-userdata-shm.mount: Deactivated successfully.
Dec  1 09:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3334922e409c57716a880baf1b1202bda6449b513322f5e2d0b0edc6459fb31e-merged.mount: Deactivated successfully.
Dec  1 09:38:40 compute-0 podman[250219]: 2025-12-01 09:38:40.090045954 +0000 UTC m=+0.127669544 container cleanup 2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:38:40 compute-0 systemd[1]: libpod-conmon-2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5.scope: Deactivated successfully.
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.128 189495 INFO nova.compute.manager [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.128 189495 DEBUG oslo.service.loopingcall [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.129 189495 DEBUG nova.compute.manager [-] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.129 189495 DEBUG nova.network.neutron [-] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:38:40 compute-0 podman[250269]: 2025-12-01 09:38:40.172827364 +0000 UTC m=+0.055169527 container remove 2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.180 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[fbecb5d5-992e-4bac-b61d-fca15456af5a]: (4, ('Mon Dec  1 09:38:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff (2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5)\n2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5\nMon Dec  1 09:38:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff (2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5)\n2f80b03765e40a4815a093c75ababa2ab21375fe8521715fb03f7313d6b1afa5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.182 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[bf29fef2-b3ed-4512-be27-3b6e343f550b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.183 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d15875-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.185 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:40 compute-0 kernel: tap52d15875-20: left promiscuous mode
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.188 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.192 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[ad992912-266f-4791-b4e4-4e50618ec9b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.204 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.213 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcc5624-e49c-40d3-8d51-b9ace77c3356]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.214 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb2fa53-74b9-49ff-9dee-120cb4b904f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.234 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac719b1-c7c6-48e2-b4af-2c8d96511790]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383912, 'reachable_time': 17610, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250283, 'error': None, 'target': 'ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d52d15875\x2d2a2e\x2d463a\x2dbc5d\x2d8fa6b8466bff.mount: Deactivated successfully.
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.250 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52d15875-2a2e-463a-bc5d-8fa6b8466bff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:38:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:38:40.251 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[b9514278-039f-426e-b80d-0aaf7f322f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.659 189495 DEBUG nova.compute.manager [req-4dc15b8a-d978-4444-bafb-929fcd6641e1 req-4bca3935-3772-4757-9207-973a6b81d462 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received event network-vif-unplugged-1632735e-15c5-4d6b-a450-baa001b88ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.660 189495 DEBUG oslo_concurrency.lockutils [req-4dc15b8a-d978-4444-bafb-929fcd6641e1 req-4bca3935-3772-4757-9207-973a6b81d462 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.661 189495 DEBUG oslo_concurrency.lockutils [req-4dc15b8a-d978-4444-bafb-929fcd6641e1 req-4bca3935-3772-4757-9207-973a6b81d462 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.661 189495 DEBUG oslo_concurrency.lockutils [req-4dc15b8a-d978-4444-bafb-929fcd6641e1 req-4bca3935-3772-4757-9207-973a6b81d462 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.662 189495 DEBUG nova.compute.manager [req-4dc15b8a-d978-4444-bafb-929fcd6641e1 req-4bca3935-3772-4757-9207-973a6b81d462 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] No waiting events found dispatching network-vif-unplugged-1632735e-15c5-4d6b-a450-baa001b88ac2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:38:40 compute-0 nova_compute[189491]: 2025-12-01 09:38:40.663 189495 DEBUG nova.compute.manager [req-4dc15b8a-d978-4444-bafb-929fcd6641e1 req-4bca3935-3772-4757-9207-973a6b81d462 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received event network-vif-unplugged-1632735e-15c5-4d6b-a450-baa001b88ac2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.080 189495 DEBUG nova.network.neutron [-] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.096 189495 INFO nova.compute.manager [-] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Took 0.97 seconds to deallocate network for instance.#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.140 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.141 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.209 189495 DEBUG nova.compute.provider_tree [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.232 189495 DEBUG nova.scheduler.client.report [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.261 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.293 189495 INFO nova.scheduler.client.report [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Deleted allocations for instance 7ed22ffd-011d-48d7-962a-8606e471a59e#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.369 189495 DEBUG oslo_concurrency.lockutils [None req-f75058fb-6d76-4d32-9fbf-a253a9cd4b99 962a55152ff34fdda5eae1f8aee3a7a9 fac95b8a995a4174bfa966a8d9d9aa01 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:41 compute-0 podman[250285]: 2025-12-01 09:38:41.714550138 +0000 UTC m=+0.081517198 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:38:41 compute-0 podman[250286]: 2025-12-01 09:38:41.733927011 +0000 UTC m=+0.100324668 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.988 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:41 compute-0 nova_compute[189491]: 2025-12-01 09:38:41.988 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.733 189495 DEBUG nova.compute.manager [req-41ecc0c7-8ed3-4b9e-a87a-23b018793dfa req-0850e3ad-cb42-4793-8ca1-3dbb0304b6a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received event network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.734 189495 DEBUG oslo_concurrency.lockutils [req-41ecc0c7-8ed3-4b9e-a87a-23b018793dfa req-0850e3ad-cb42-4793-8ca1-3dbb0304b6a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.734 189495 DEBUG oslo_concurrency.lockutils [req-41ecc0c7-8ed3-4b9e-a87a-23b018793dfa req-0850e3ad-cb42-4793-8ca1-3dbb0304b6a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.735 189495 DEBUG oslo_concurrency.lockutils [req-41ecc0c7-8ed3-4b9e-a87a-23b018793dfa req-0850e3ad-cb42-4793-8ca1-3dbb0304b6a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7ed22ffd-011d-48d7-962a-8606e471a59e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.735 189495 DEBUG nova.compute.manager [req-41ecc0c7-8ed3-4b9e-a87a-23b018793dfa req-0850e3ad-cb42-4793-8ca1-3dbb0304b6a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] No waiting events found dispatching network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.736 189495 WARNING nova.compute.manager [req-41ecc0c7-8ed3-4b9e-a87a-23b018793dfa req-0850e3ad-cb42-4793-8ca1-3dbb0304b6a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received unexpected event network-vif-plugged-1632735e-15c5-4d6b-a450-baa001b88ac2 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.736 189495 DEBUG nova.compute.manager [req-41ecc0c7-8ed3-4b9e-a87a-23b018793dfa req-0850e3ad-cb42-4793-8ca1-3dbb0304b6a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Received event network-vif-deleted-1632735e-15c5-4d6b-a450-baa001b88ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:38:42 compute-0 nova_compute[189491]: 2025-12-01 09:38:42.922 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:44 compute-0 nova_compute[189491]: 2025-12-01 09:38:44.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:45 compute-0 nova_compute[189491]: 2025-12-01 09:38:45.063 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:45 compute-0 nova_compute[189491]: 2025-12-01 09:38:45.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:45 compute-0 nova_compute[189491]: 2025-12-01 09:38:45.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:45 compute-0 nova_compute[189491]: 2025-12-01 09:38:45.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:38:46 compute-0 nova_compute[189491]: 2025-12-01 09:38:46.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:38:47 compute-0 nova_compute[189491]: 2025-12-01 09:38:47.925 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:48 compute-0 podman[250325]: 2025-12-01 09:38:48.697092746 +0000 UTC m=+0.074320753 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:38:50 compute-0 nova_compute[189491]: 2025-12-01 09:38:50.068 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:50 compute-0 podman[250346]: 2025-12-01 09:38:50.692466013 +0000 UTC m=+0.065112150 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:38:50 compute-0 podman[250347]: 2025-12-01 09:38:50.6997577 +0000 UTC m=+0.068675365 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.expose-services=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc.)
Dec  1 09:38:52 compute-0 nova_compute[189491]: 2025-12-01 09:38:52.928 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:55 compute-0 nova_compute[189491]: 2025-12-01 09:38:55.035 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764581920.032148, 7ed22ffd-011d-48d7-962a-8606e471a59e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:38:55 compute-0 nova_compute[189491]: 2025-12-01 09:38:55.036 189495 INFO nova.compute.manager [-] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:38:55 compute-0 nova_compute[189491]: 2025-12-01 09:38:55.058 189495 DEBUG nova.compute.manager [None req-55ecfd5b-e456-4e0e-a1d0-3b2158f8d92f - - - - - -] [instance: 7ed22ffd-011d-48d7-962a-8606e471a59e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:38:55 compute-0 nova_compute[189491]: 2025-12-01 09:38:55.073 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:57 compute-0 podman[250390]: 2025-12-01 09:38:57.71556834 +0000 UTC m=+0.083329063 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:38:57 compute-0 podman[250391]: 2025-12-01 09:38:57.721595597 +0000 UTC m=+0.072892848 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  1 09:38:57 compute-0 nova_compute[189491]: 2025-12-01 09:38:57.931 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:38:59 compute-0 podman[203700]: time="2025-12-01T09:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:38:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:38:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4344 "" "Go-http-client/1.1"
Dec  1 09:39:00 compute-0 nova_compute[189491]: 2025-12-01 09:39:00.078 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:01 compute-0 openstack_network_exporter[205866]: ERROR   09:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:39:01 compute-0 openstack_network_exporter[205866]: ERROR   09:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:39:01 compute-0 openstack_network_exporter[205866]: ERROR   09:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:39:01 compute-0 openstack_network_exporter[205866]: ERROR   09:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:39:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:39:01 compute-0 openstack_network_exporter[205866]: ERROR   09:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:39:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:39:01 compute-0 podman[250427]: 2025-12-01 09:39:01.732456721 +0000 UTC m=+0.104593771 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true)
Dec  1 09:39:01 compute-0 podman[250428]: 2025-12-01 09:39:01.739296448 +0000 UTC m=+0.107892232 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:39:02 compute-0 nova_compute[189491]: 2025-12-01 09:39:02.935 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:05 compute-0 nova_compute[189491]: 2025-12-01 09:39:05.081 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:07 compute-0 nova_compute[189491]: 2025-12-01 09:39:07.937 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:10 compute-0 nova_compute[189491]: 2025-12-01 09:39:10.085 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:11 compute-0 ovn_controller[97794]: 2025-12-01T09:39:11Z|00064|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Dec  1 09:39:12 compute-0 podman[250471]: 2025-12-01 09:39:12.71347221 +0000 UTC m=+0.087437603 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:39:12 compute-0 podman[250472]: 2025-12-01 09:39:12.728570368 +0000 UTC m=+0.088030687 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  1 09:39:12 compute-0 nova_compute[189491]: 2025-12-01 09:39:12.940 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:15 compute-0 nova_compute[189491]: 2025-12-01 09:39:15.090 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:17 compute-0 nova_compute[189491]: 2025-12-01 09:39:17.943 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:19 compute-0 podman[250512]: 2025-12-01 09:39:19.704255789 +0000 UTC m=+0.080035534 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:39:20 compute-0 nova_compute[189491]: 2025-12-01 09:39:20.093 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:21 compute-0 podman[250532]: 2025-12-01 09:39:21.727640508 +0000 UTC m=+0.100107233 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:39:21 compute-0 podman[250533]: 2025-12-01 09:39:21.727612327 +0000 UTC m=+0.084037260 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:39:22 compute-0 nova_compute[189491]: 2025-12-01 09:39:22.944 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:25 compute-0 nova_compute[189491]: 2025-12-01 09:39:25.097 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:39:26.528 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:39:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:39:26.529 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:39:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:39:26.529 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:39:27 compute-0 nova_compute[189491]: 2025-12-01 09:39:27.947 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:28 compute-0 podman[250575]: 2025-12-01 09:39:28.733918783 +0000 UTC m=+0.091426901 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:39:28 compute-0 podman[250574]: 2025-12-01 09:39:28.74567688 +0000 UTC m=+0.112468884 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 09:39:29 compute-0 podman[203700]: time="2025-12-01T09:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:39:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:39:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec  1 09:39:30 compute-0 nova_compute[189491]: 2025-12-01 09:39:30.101 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:31 compute-0 openstack_network_exporter[205866]: ERROR   09:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:39:31 compute-0 openstack_network_exporter[205866]: ERROR   09:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:39:31 compute-0 openstack_network_exporter[205866]: ERROR   09:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:39:31 compute-0 openstack_network_exporter[205866]: ERROR   09:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:39:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:39:31 compute-0 openstack_network_exporter[205866]: ERROR   09:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:39:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:39:32 compute-0 podman[250612]: 2025-12-01 09:39:32.70310594 +0000 UTC m=+0.082915873 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  1 09:39:32 compute-0 podman[250613]: 2025-12-01 09:39:32.726819918 +0000 UTC m=+0.102376477 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 09:39:32 compute-0 nova_compute[189491]: 2025-12-01 09:39:32.952 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:35 compute-0 nova_compute[189491]: 2025-12-01 09:39:35.107 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:35 compute-0 nova_compute[189491]: 2025-12-01 09:39:35.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:35 compute-0 nova_compute[189491]: 2025-12-01 09:39:35.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:39:35 compute-0 nova_compute[189491]: 2025-12-01 09:39:35.720 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:39:35 compute-0 nova_compute[189491]: 2025-12-01 09:39:35.739 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 09:39:37 compute-0 nova_compute[189491]: 2025-12-01 09:39:37.953 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:38 compute-0 nova_compute[189491]: 2025-12-01 09:39:38.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:38 compute-0 nova_compute[189491]: 2025-12-01 09:39:38.741 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:39:38 compute-0 nova_compute[189491]: 2025-12-01 09:39:38.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:39:38 compute-0 nova_compute[189491]: 2025-12-01 09:39:38.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:39:38 compute-0 nova_compute[189491]: 2025-12-01 09:39:38.742 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.109 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.110 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5371MB free_disk=72.37950134277344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.110 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.111 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.302 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.303 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.330 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.347 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.371 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:39:39 compute-0 nova_compute[189491]: 2025-12-01 09:39:39.372 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:39:40 compute-0 nova_compute[189491]: 2025-12-01 09:39:40.112 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:41 compute-0 nova_compute[189491]: 2025-12-01 09:39:41.374 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:42 compute-0 nova_compute[189491]: 2025-12-01 09:39:42.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:42 compute-0 nova_compute[189491]: 2025-12-01 09:39:42.958 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:43 compute-0 nova_compute[189491]: 2025-12-01 09:39:43.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:43 compute-0 podman[250659]: 2025-12-01 09:39:43.721355949 +0000 UTC m=+0.079982641 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:39:43 compute-0 podman[250660]: 2025-12-01 09:39:43.732797168 +0000 UTC m=+0.104448749 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 09:39:45 compute-0 nova_compute[189491]: 2025-12-01 09:39:45.118 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:45 compute-0 nova_compute[189491]: 2025-12-01 09:39:45.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:45 compute-0 nova_compute[189491]: 2025-12-01 09:39:45.731 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:45 compute-0 nova_compute[189491]: 2025-12-01 09:39:45.731 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:39:46 compute-0 nova_compute[189491]: 2025-12-01 09:39:46.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:46 compute-0 nova_compute[189491]: 2025-12-01 09:39:46.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:47 compute-0 nova_compute[189491]: 2025-12-01 09:39:47.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:39:47 compute-0 nova_compute[189491]: 2025-12-01 09:39:47.959 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:50 compute-0 nova_compute[189491]: 2025-12-01 09:39:50.122 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:50 compute-0 podman[250702]: 2025-12-01 09:39:50.697822644 +0000 UTC m=+0.070723852 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 09:39:52 compute-0 podman[250723]: 2025-12-01 09:39:52.687217757 +0000 UTC m=+0.067182613 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:39:52 compute-0 podman[250724]: 2025-12-01 09:39:52.731161442 +0000 UTC m=+0.104827411 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, distribution-scope=public, version=9.4, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible)
Dec  1 09:39:52 compute-0 nova_compute[189491]: 2025-12-01 09:39:52.960 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:55 compute-0 nova_compute[189491]: 2025-12-01 09:39:55.127 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:57 compute-0 nova_compute[189491]: 2025-12-01 09:39:57.968 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:39:59 compute-0 podman[250766]: 2025-12-01 09:39:59.733322379 +0000 UTC m=+0.098703138 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec  1 09:39:59 compute-0 podman[250767]: 2025-12-01 09:39:59.742885658 +0000 UTC m=+0.101381056 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  1 09:39:59 compute-0 podman[203700]: time="2025-12-01T09:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:39:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:39:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Dec  1 09:40:00 compute-0 nova_compute[189491]: 2025-12-01 09:40:00.134 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:01 compute-0 openstack_network_exporter[205866]: ERROR   09:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:40:01 compute-0 openstack_network_exporter[205866]: ERROR   09:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:40:01 compute-0 openstack_network_exporter[205866]: ERROR   09:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:40:01 compute-0 openstack_network_exporter[205866]: ERROR   09:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:40:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:40:01 compute-0 openstack_network_exporter[205866]: ERROR   09:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:40:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:40:02 compute-0 nova_compute[189491]: 2025-12-01 09:40:02.975 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:03 compute-0 podman[250803]: 2025-12-01 09:40:03.697392801 +0000 UTC m=+0.070448115 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  1 09:40:03 compute-0 podman[250804]: 2025-12-01 09:40:03.750282238 +0000 UTC m=+0.115488317 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:40:05 compute-0 nova_compute[189491]: 2025-12-01 09:40:05.137 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:07 compute-0 nova_compute[189491]: 2025-12-01 09:40:07.979 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:10 compute-0 nova_compute[189491]: 2025-12-01 09:40:10.142 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:12 compute-0 nova_compute[189491]: 2025-12-01 09:40:12.980 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:14 compute-0 podman[250847]: 2025-12-01 09:40:14.739069017 +0000 UTC m=+0.095947559 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:40:14 compute-0 podman[250848]: 2025-12-01 09:40:14.750149573 +0000 UTC m=+0.096948294 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:40:15 compute-0 nova_compute[189491]: 2025-12-01 09:40:15.147 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:17 compute-0 nova_compute[189491]: 2025-12-01 09:40:17.984 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.789 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.790 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': [], 'disk.device.read.latency': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.810 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.810 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.810 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.810 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.810 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.810 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.811 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.811 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.812 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.812 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.812 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.813 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:40:19.818 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:40:20 compute-0 nova_compute[189491]: 2025-12-01 09:40:20.150 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:21 compute-0 podman[250891]: 2025-12-01 09:40:21.766628277 +0000 UTC m=+0.127253059 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 09:40:22 compute-0 nova_compute[189491]: 2025-12-01 09:40:22.986 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:23 compute-0 podman[250911]: 2025-12-01 09:40:23.722158558 +0000 UTC m=+0.090717340 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, config_id=edpm, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=)
Dec  1 09:40:23 compute-0 podman[250910]: 2025-12-01 09:40:23.750675768 +0000 UTC m=+0.116195445 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:40:25 compute-0 nova_compute[189491]: 2025-12-01 09:40:25.155 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:40:26.530 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:40:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:40:26.532 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:40:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:40:26.533 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:40:27 compute-0 nova_compute[189491]: 2025-12-01 09:40:27.988 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:29 compute-0 podman[203700]: time="2025-12-01T09:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:40:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:40:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4343 "" "Go-http-client/1.1"
Dec  1 09:40:30 compute-0 nova_compute[189491]: 2025-12-01 09:40:30.159 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:30 compute-0 podman[250953]: 2025-12-01 09:40:30.720618102 +0000 UTC m=+0.090481494 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:40:30 compute-0 podman[250954]: 2025-12-01 09:40:30.743249316 +0000 UTC m=+0.105060848 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 09:40:31 compute-0 openstack_network_exporter[205866]: ERROR   09:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:40:31 compute-0 openstack_network_exporter[205866]: ERROR   09:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:40:31 compute-0 openstack_network_exporter[205866]: ERROR   09:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:40:31 compute-0 openstack_network_exporter[205866]: ERROR   09:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:40:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:40:31 compute-0 openstack_network_exporter[205866]: ERROR   09:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:40:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:40:32 compute-0 nova_compute[189491]: 2025-12-01 09:40:32.994 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:34 compute-0 podman[250991]: 2025-12-01 09:40:34.7259073 +0000 UTC m=+0.090101385 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 09:40:34 compute-0 podman[250992]: 2025-12-01 09:40:34.7548247 +0000 UTC m=+0.112859962 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 09:40:35 compute-0 nova_compute[189491]: 2025-12-01 09:40:35.162 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:40:35 compute-0 nova_compute[189491]: 2025-12-01 09:40:35.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:40:35 compute-0 nova_compute[189491]: 2025-12-01 09:40:35.717 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 09:40:35 compute-0 nova_compute[189491]: 2025-12-01 09:40:35.717 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 09:40:35 compute-0 nova_compute[189491]: 2025-12-01 09:40:35.867 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 09:40:37 compute-0 nova_compute[189491]: 2025-12-01 09:40:37.994 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:40 compute-0 nova_compute[189491]: 2025-12-01 09:40:40.168 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:40 compute-0 nova_compute[189491]: 2025-12-01 09:40:40.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.018 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.018 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.018 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.018 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.428 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.430 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5375MB free_disk=72.37950134277344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.430 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.431 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.743 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.744 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 09:40:41 compute-0 nova_compute[189491]: 2025-12-01 09:40:41.779 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 09:40:42 compute-0 nova_compute[189491]: 2025-12-01 09:40:42.517 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 09:40:42 compute-0 nova_compute[189491]: 2025-12-01 09:40:42.520 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 09:40:42 compute-0 nova_compute[189491]: 2025-12-01 09:40:42.521 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:40:42 compute-0 nova_compute[189491]: 2025-12-01 09:40:42.997 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:44 compute-0 nova_compute[189491]: 2025-12-01 09:40:44.520 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:40:44 compute-0 nova_compute[189491]: 2025-12-01 09:40:44.521 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:40:44 compute-0 nova_compute[189491]: 2025-12-01 09:40:44.522 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:40:45 compute-0 nova_compute[189491]: 2025-12-01 09:40:45.174 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:45 compute-0 podman[251032]: 2025-12-01 09:40:45.695563003 +0000 UTC m=+0.068036985 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:40:45 compute-0 nova_compute[189491]: 2025-12-01 09:40:45.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:40:45 compute-0 nova_compute[189491]: 2025-12-01 09:40:45.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 09:40:45 compute-0 podman[251033]: 2025-12-01 09:40:45.726286128 +0000 UTC m=+0.092643588 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 09:40:46 compute-0 nova_compute[189491]: 2025-12-01 09:40:46.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:40:46 compute-0 nova_compute[189491]: 2025-12-01 09:40:46.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:40:48 compute-0 nova_compute[189491]: 2025-12-01 09:40:48.002 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:48 compute-0 nova_compute[189491]: 2025-12-01 09:40:48.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:40:50 compute-0 nova_compute[189491]: 2025-12-01 09:40:50.178 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:52 compute-0 podman[251073]: 2025-12-01 09:40:52.706046077 +0000 UTC m=+0.084234569 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  1 09:40:53 compute-0 nova_compute[189491]: 2025-12-01 09:40:53.005 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:54 compute-0 podman[251092]: 2025-12-01 09:40:54.692474246 +0000 UTC m=+0.068859005 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:40:54 compute-0 podman[251093]: 2025-12-01 09:40:54.703933092 +0000 UTC m=+0.071246455 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, name=ubi9, architecture=x86_64, release=1214.1726694543, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=)
Dec  1 09:40:55 compute-0 nova_compute[189491]: 2025-12-01 09:40:55.183 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:58 compute-0 nova_compute[189491]: 2025-12-01 09:40:58.008 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:40:59 compute-0 podman[203700]: time="2025-12-01T09:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:40:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:40:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Dec  1 09:41:00 compute-0 nova_compute[189491]: 2025-12-01 09:41:00.188 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:41:01 compute-0 openstack_network_exporter[205866]: ERROR   09:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:41:01 compute-0 openstack_network_exporter[205866]: ERROR   09:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:41:01 compute-0 openstack_network_exporter[205866]: ERROR   09:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:41:01 compute-0 openstack_network_exporter[205866]: ERROR   09:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:41:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:41:01 compute-0 openstack_network_exporter[205866]: ERROR   09:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:41:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:41:01 compute-0 podman[251135]: 2025-12-01 09:41:01.694587102 +0000 UTC m=+0.067320648 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  1 09:41:01 compute-0 podman[251134]: 2025-12-01 09:41:01.700216792 +0000 UTC m=+0.075257885 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9)
Dec  1 09:41:03 compute-0 nova_compute[189491]: 2025-12-01 09:41:03.012 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:41:05 compute-0 nova_compute[189491]: 2025-12-01 09:41:05.193 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:41:05 compute-0 podman[251168]: 2025-12-01 09:41:05.720683199 +0000 UTC m=+0.089162732 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:41:05 compute-0 podman[251169]: 2025-12-01 09:41:05.807878019 +0000 UTC m=+0.159778479 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 09:41:08 compute-0 nova_compute[189491]: 2025-12-01 09:41:08.017 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:10 compute-0 nova_compute[189491]: 2025-12-01 09:41:10.196 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:13 compute-0 nova_compute[189491]: 2025-12-01 09:41:13.021 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:15 compute-0 nova_compute[189491]: 2025-12-01 09:41:15.202 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:16 compute-0 podman[251211]: 2025-12-01 09:41:16.954817227 +0000 UTC m=+0.114066130 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:41:16 compute-0 podman[251212]: 2025-12-01 09:41:16.981177304 +0000 UTC m=+0.121087595 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:41:18 compute-0 nova_compute[189491]: 2025-12-01 09:41:18.023 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:20 compute-0 nova_compute[189491]: 2025-12-01 09:41:20.208 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:23 compute-0 nova_compute[189491]: 2025-12-01 09:41:23.026 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:23 compute-0 podman[251255]: 2025-12-01 09:41:23.738689352 +0000 UTC m=+0.104366602 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:41:25 compute-0 nova_compute[189491]: 2025-12-01 09:41:25.213 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:25 compute-0 podman[251273]: 2025-12-01 09:41:25.71626409 +0000 UTC m=+0.084462704 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:41:25 compute-0 podman[251274]: 2025-12-01 09:41:25.779154026 +0000 UTC m=+0.136445839 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9)
Dec  1 09:41:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:41:26.531 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:41:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:41:26.532 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:41:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:41:26.532 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:41:28 compute-0 nova_compute[189491]: 2025-12-01 09:41:28.029 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:28 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:41:28.311 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:41:28 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:41:28.313 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:41:28 compute-0 nova_compute[189491]: 2025-12-01 09:41:28.315 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:29 compute-0 podman[203700]: time="2025-12-01T09:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:41:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:41:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Dec  1 09:41:30 compute-0 nova_compute[189491]: 2025-12-01 09:41:30.219 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:41:31.318 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:41:31 compute-0 openstack_network_exporter[205866]: ERROR   09:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:41:31 compute-0 openstack_network_exporter[205866]: ERROR   09:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:41:31 compute-0 openstack_network_exporter[205866]: ERROR   09:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:41:31 compute-0 openstack_network_exporter[205866]: ERROR   09:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:41:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:41:31 compute-0 openstack_network_exporter[205866]: ERROR   09:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:41:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:41:32 compute-0 podman[251316]: 2025-12-01 09:41:32.706940521 +0000 UTC m=+0.078385593 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:41:32 compute-0 podman[251317]: 2025-12-01 09:41:32.721860522 +0000 UTC m=+0.085768907 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:41:33 compute-0 nova_compute[189491]: 2025-12-01 09:41:33.034 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:35 compute-0 nova_compute[189491]: 2025-12-01 09:41:35.225 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:36 compute-0 podman[251356]: 2025-12-01 09:41:36.739299662 +0000 UTC m=+0.109514178 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:41:36 compute-0 podman[251357]: 2025-12-01 09:41:36.758313395 +0000 UTC m=+0.113733333 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:41:37 compute-0 nova_compute[189491]: 2025-12-01 09:41:37.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:37 compute-0 nova_compute[189491]: 2025-12-01 09:41:37.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:41:37 compute-0 nova_compute[189491]: 2025-12-01 09:41:37.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:41:37 compute-0 nova_compute[189491]: 2025-12-01 09:41:37.738 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 09:41:38 compute-0 nova_compute[189491]: 2025-12-01 09:41:38.035 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:40 compute-0 nova_compute[189491]: 2025-12-01 09:41:40.229 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:42 compute-0 nova_compute[189491]: 2025-12-01 09:41:42.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:42 compute-0 nova_compute[189491]: 2025-12-01 09:41:42.745 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:41:42 compute-0 nova_compute[189491]: 2025-12-01 09:41:42.747 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:41:42 compute-0 nova_compute[189491]: 2025-12-01 09:41:42.748 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:41:42 compute-0 nova_compute[189491]: 2025-12-01 09:41:42.749 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.036 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.109 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.110 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5363MB free_disk=72.37884521484375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.110 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.111 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.206 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.207 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.274 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.294 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.296 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:41:43 compute-0 nova_compute[189491]: 2025-12-01 09:41:43.296 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:41:44 compute-0 nova_compute[189491]: 2025-12-01 09:41:44.296 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:44 compute-0 nova_compute[189491]: 2025-12-01 09:41:44.296 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:44 compute-0 nova_compute[189491]: 2025-12-01 09:41:44.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:45 compute-0 nova_compute[189491]: 2025-12-01 09:41:45.235 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:46 compute-0 nova_compute[189491]: 2025-12-01 09:41:46.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:47 compute-0 podman[251402]: 2025-12-01 09:41:47.680590769 +0000 UTC m=+0.060436257 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:41:47 compute-0 nova_compute[189491]: 2025-12-01 09:41:47.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:47 compute-0 nova_compute[189491]: 2025-12-01 09:41:47.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:47 compute-0 nova_compute[189491]: 2025-12-01 09:41:47.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:41:47 compute-0 podman[251403]: 2025-12-01 09:41:47.724883862 +0000 UTC m=+0.100881264 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible)
Dec  1 09:41:48 compute-0 nova_compute[189491]: 2025-12-01 09:41:48.038 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:50 compute-0 nova_compute[189491]: 2025-12-01 09:41:50.240 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:50 compute-0 nova_compute[189491]: 2025-12-01 09:41:50.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:50 compute-0 nova_compute[189491]: 2025-12-01 09:41:50.726 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:41:53 compute-0 nova_compute[189491]: 2025-12-01 09:41:53.043 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:54 compute-0 podman[251445]: 2025-12-01 09:41:54.707634965 +0000 UTC m=+0.076154276 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, 
org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible)
Dec  1 09:41:55 compute-0 nova_compute[189491]: 2025-12-01 09:41:55.246 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:56 compute-0 podman[251463]: 2025-12-01 09:41:56.715593932 +0000 UTC m=+0.094640188 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:41:56 compute-0 podman[251464]: 2025-12-01 09:41:56.741376544 +0000 UTC m=+0.115094498 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, managed_by=edpm_ansible, io.buildah.version=1.29.0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 09:41:58 compute-0 nova_compute[189491]: 2025-12-01 09:41:58.046 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:41:58 compute-0 ovn_controller[97794]: 2025-12-01T09:41:58Z|00065|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Dec  1 09:41:59 compute-0 podman[203700]: time="2025-12-01T09:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:41:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:41:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Dec  1 09:42:00 compute-0 nova_compute[189491]: 2025-12-01 09:42:00.251 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:01 compute-0 openstack_network_exporter[205866]: ERROR   09:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:42:01 compute-0 openstack_network_exporter[205866]: ERROR   09:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:42:01 compute-0 openstack_network_exporter[205866]: ERROR   09:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:42:01 compute-0 openstack_network_exporter[205866]: ERROR   09:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:42:01 compute-0 openstack_network_exporter[205866]: ERROR   09:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:42:03 compute-0 nova_compute[189491]: 2025-12-01 09:42:03.050 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:03 compute-0 podman[251507]: 2025-12-01 09:42:03.710796376 +0000 UTC m=+0.074710031 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 09:42:03 compute-0 podman[251506]: 2025-12-01 09:42:03.717890772 +0000 UTC m=+0.089152720 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat 
Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41)
Dec  1 09:42:05 compute-0 nova_compute[189491]: 2025-12-01 09:42:05.256 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:05 compute-0 nova_compute[189491]: 2025-12-01 09:42:05.519 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:07 compute-0 nova_compute[189491]: 2025-12-01 09:42:07.482 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:07 compute-0 nova_compute[189491]: 2025-12-01 09:42:07.634 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:07 compute-0 podman[251541]: 2025-12-01 09:42:07.713407616 +0000 UTC m=+0.075260326 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  1 09:42:07 compute-0 podman[251542]: 2025-12-01 09:42:07.769125154 +0000 UTC m=+0.119346894 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:42:08 compute-0 nova_compute[189491]: 2025-12-01 09:42:08.012 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:08 compute-0 nova_compute[189491]: 2025-12-01 09:42:08.053 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:10 compute-0 nova_compute[189491]: 2025-12-01 09:42:10.262 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:10 compute-0 nova_compute[189491]: 2025-12-01 09:42:10.835 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:13 compute-0 nova_compute[189491]: 2025-12-01 09:42:13.057 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:15 compute-0 nova_compute[189491]: 2025-12-01 09:42:15.268 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:16 compute-0 nova_compute[189491]: 2025-12-01 09:42:16.305 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:16 compute-0 nova_compute[189491]: 2025-12-01 09:42:16.669 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:16 compute-0 nova_compute[189491]: 2025-12-01 09:42:16.693 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:18 compute-0 nova_compute[189491]: 2025-12-01 09:42:18.058 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:18 compute-0 podman[251589]: 2025-12-01 09:42:18.705808677 +0000 UTC m=+0.075672305 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  1 09:42:18 compute-0 podman[251588]: 2025-12-01 09:42:18.730618775 +0000 UTC m=+0.101248193 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.790 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.791 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.810 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.810 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.810 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.810 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.810 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.810 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.810 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.811 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.811 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:42:19.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:42:20 compute-0 nova_compute[189491]: 2025-12-01 09:42:20.272 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:23 compute-0 nova_compute[189491]: 2025-12-01 09:42:23.062 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:25 compute-0 nova_compute[189491]: 2025-12-01 09:42:25.278 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:25 compute-0 podman[251634]: 2025-12-01 09:42:25.763279911 +0000 UTC m=+0.135804923 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 09:42:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:26.533 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:26.533 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:26.533 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:27 compute-0 podman[251654]: 2025-12-01 09:42:27.712156276 +0000 UTC m=+0.074067026 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:42:27 compute-0 podman[251655]: 2025-12-01 09:42:27.721753795 +0000 UTC m=+0.077575933 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-type=git, version=9.4, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.065 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.616 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "38643437-7822-4834-8301-02d3402cad15" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.617 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "38643437-7822-4834-8301-02d3402cad15" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.638 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:42:28 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:28.720 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.721 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:28 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:28.722 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.756 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.757 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.768 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.769 189495 INFO nova.compute.claims [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.805 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.805 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.867 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:42:28 compute-0 nova_compute[189491]: 2025-12-01 09:42:28.986 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.027 189495 DEBUG nova.compute.provider_tree [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.057 189495 DEBUG nova.scheduler.client.report [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.087 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.088 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.091 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.100 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.101 189495 INFO nova.compute.claims [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.170 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.170 189495 DEBUG nova.network.neutron [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.193 189495 INFO nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.214 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.276 189495 DEBUG nova.compute.provider_tree [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.304 189495 DEBUG nova.scheduler.client.report [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.331 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.333 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.333 189495 INFO nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Creating image(s)#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.334 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "/var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.334 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "/var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.335 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "/var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.335 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.336 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.341 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.341 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.384 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.386 189495 DEBUG nova.network.neutron [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.404 189495 INFO nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.431 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.586 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.588 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.588 189495 INFO nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Creating image(s)#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.589 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "/var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.589 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "/var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.590 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "/var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:29 compute-0 nova_compute[189491]: 2025-12-01 09:42:29.590 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:29 compute-0 podman[203700]: time="2025-12-01T09:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:42:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 09:42:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec  1 09:42:30 compute-0 nova_compute[189491]: 2025-12-01 09:42:30.098 189495 DEBUG nova.policy [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '688e0c65604244fb9d423018bc88d238', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd7764856ebb94acbaa0b40cbbf09cb3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:42:30 compute-0 nova_compute[189491]: 2025-12-01 09:42:30.282 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:30 compute-0 nova_compute[189491]: 2025-12-01 09:42:30.534 189495 DEBUG nova.policy [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4e22b2cefdd467b833f8e2b663a0b75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5cce108434ca43799d8b26b6c7f91b2d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:42:31 compute-0 openstack_network_exporter[205866]: ERROR   09:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:42:31 compute-0 openstack_network_exporter[205866]: ERROR   09:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:42:31 compute-0 openstack_network_exporter[205866]: ERROR   09:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:42:31 compute-0 openstack_network_exporter[205866]: ERROR   09:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:42:31 compute-0 openstack_network_exporter[205866]: ERROR   09:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:42:31 compute-0 nova_compute[189491]: 2025-12-01 09:42:31.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:31 compute-0 nova_compute[189491]: 2025-12-01 09:42:31.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.068 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.249 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.273 189495 DEBUG nova.network.neutron [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Successfully created port: 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.320 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd.part --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.321 189495 DEBUG nova.virt.images [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] 7ddeffd1-d06f-4a46-9e41-114974daa90e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.323 189495 DEBUG nova.privsep.utils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.324 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd.part /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.345 189495 DEBUG nova.network.neutron [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Successfully created port: 7284339c-1e96-403f-9c31-171c5b077ec6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.561 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd.part /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd.converted" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.571 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.636 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd.converted --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.639 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.666 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 4.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.667 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.690 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.707 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.754 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.756 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.757 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.782 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.800 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.801 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.839 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.840 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.882 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.883 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.884 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.904 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.926 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.964 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.965 189495 DEBUG nova.virt.disk.api [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Checking if we can resize image /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:42:33 compute-0 nova_compute[189491]: 2025-12-01 09:42:33.965 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.012 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.013 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.029 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.030 189495 DEBUG nova.virt.disk.api [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Cannot resize image /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.031 189495 DEBUG nova.objects.instance [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lazy-loading 'migration_context' on Instance uuid 38643437-7822-4834-8301-02d3402cad15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.054 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.055 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Ensure instance console log exists: /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.055 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.056 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.056 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.057 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.057 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.058 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.119 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.120 189495 DEBUG nova.virt.disk.api [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Checking if we can resize image /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.120 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.218 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.219 189495 DEBUG nova.virt.disk.api [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Cannot resize image /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.220 189495 DEBUG nova.objects.instance [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lazy-loading 'migration_context' on Instance uuid cd1ac331-c146-4eb5-bc53-42a82dd3467b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.236 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.237 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Ensure instance console log exists: /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.238 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.241 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:34 compute-0 nova_compute[189491]: 2025-12-01 09:42:34.241 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:34 compute-0 podman[251738]: 2025-12-01 09:42:34.695686679 +0000 UTC m=+0.067134972 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  1 09:42:34 compute-0 podman[251737]: 2025-12-01 09:42:34.719132293 +0000 UTC m=+0.094843852 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1755695350, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.)
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.186 189495 DEBUG nova.network.neutron [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Successfully updated port: 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.188 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.208 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.208 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquired lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.209 189495 DEBUG nova.network.neutron [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.285 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.328 189495 DEBUG nova.compute.manager [req-80948ec6-cf95-4bc4-98c9-080195c71b5b req-22e9412f-f33b-4693-862f-a56dfcdfe54e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Received event network-changed-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.329 189495 DEBUG nova.compute.manager [req-80948ec6-cf95-4bc4-98c9-080195c71b5b req-22e9412f-f33b-4693-862f-a56dfcdfe54e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Refreshing instance network info cache due to event network-changed-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.329 189495 DEBUG oslo_concurrency.lockutils [req-80948ec6-cf95-4bc4-98c9-080195c71b5b req-22e9412f-f33b-4693-862f-a56dfcdfe54e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.366 189495 DEBUG nova.network.neutron [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Successfully updated port: 7284339c-1e96-403f-9c31-171c5b077ec6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.382 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.382 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquired lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.382 189495 DEBUG nova.network.neutron [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.448 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.482 189495 DEBUG nova.network.neutron [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:42:35 compute-0 nova_compute[189491]: 2025-12-01 09:42:35.644 189495 DEBUG nova.network.neutron [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.354 189495 DEBUG nova.network.neutron [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.373 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Releasing lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.373 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Instance network_info: |[{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.374 189495 DEBUG oslo_concurrency.lockutils [req-80948ec6-cf95-4bc4-98c9-080195c71b5b req-22e9412f-f33b-4693-862f-a56dfcdfe54e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.374 189495 DEBUG nova.network.neutron [req-80948ec6-cf95-4bc4-98c9-080195c71b5b req-22e9412f-f33b-4693-862f-a56dfcdfe54e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Refreshing network info cache for port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.377 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Start _get_guest_xml network_info=[{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.385 189495 WARNING nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.395 189495 DEBUG nova.virt.libvirt.host [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.396 189495 DEBUG nova.virt.libvirt.host [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.400 189495 DEBUG nova.virt.libvirt.host [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.401 189495 DEBUG nova.virt.libvirt.host [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.401 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.402 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.402 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.402 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.403 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.403 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.404 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.404 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.404 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.404 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.405 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.405 189495 DEBUG nova.virt.hardware [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.409 189495 DEBUG nova.virt.libvirt.vif [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1816623560',display_name='tempest-AttachInterfacesUnderV243Test-server-1816623560',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1816623560',id=6,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDov5y9OwhjjL8WPI8jxvuxbUQv67pnksuH/4lF8J8r1S9hI5ZeobpiFpyHKcxVVEV1lVkVZ97drOsKr7ctk5ApG1BaxbqF45NStb7lJLgZLvMHh2SYNMaXiiNfpkaIOQ==',key_name='tempest-keypair-1935929616',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d7764856ebb94acbaa0b40cbbf09cb3d',ramdisk_id='',reservation_id='r-uw59cg9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-820336300',owner_user_name='tempest-AttachInterfacesUnderV243Test-820336300-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:42:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='688e0c65604244fb9d423018bc88d238',uuid=38643437-7822-4834-8301-02d3402cad15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.410 189495 DEBUG nova.network.os_vif_util [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Converting VIF {"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.411 189495 DEBUG nova.network.os_vif_util [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:0b:ad,bridge_name='br-int',has_traffic_filtering=True,id=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb,network=Network(8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d0f49f6-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.412 189495 DEBUG nova.objects.instance [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lazy-loading 'pci_devices' on Instance uuid 38643437-7822-4834-8301-02d3402cad15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.430 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <uuid>38643437-7822-4834-8301-02d3402cad15</uuid>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <name>instance-00000006</name>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1816623560</nova:name>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:42:37</nova:creationTime>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:user uuid="688e0c65604244fb9d423018bc88d238">tempest-AttachInterfacesUnderV243Test-820336300-project-member</nova:user>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:project uuid="d7764856ebb94acbaa0b40cbbf09cb3d">tempest-AttachInterfacesUnderV243Test-820336300</nova:project>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:port uuid="7d0f49f6-e0e1-44b1-be36-fa4df3220ddb">
Dec  1 09:42:37 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <system>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="serial">38643437-7822-4834-8301-02d3402cad15</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="uuid">38643437-7822-4834-8301-02d3402cad15</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </system>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <os>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </os>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <features>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </features>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk.config"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:ac:0b:ad"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <target dev="tap7d0f49f6-e0"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/console.log" append="off"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <video>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </video>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:42:37 compute-0 nova_compute[189491]: </domain>
Dec  1 09:42:37 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.432 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Preparing to wait for external event network-vif-plugged-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.433 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "38643437-7822-4834-8301-02d3402cad15-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.433 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "38643437-7822-4834-8301-02d3402cad15-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.433 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "38643437-7822-4834-8301-02d3402cad15-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.434 189495 DEBUG nova.virt.libvirt.vif [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1816623560',display_name='tempest-AttachInterfacesUnderV243Test-server-1816623560',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1816623560',id=6,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDov5y9OwhjjL8WPI8jxvuxbUQv67pnksuH/4lF8J8r1S9hI5ZeobpiFpyHKcxVVEV1lVkVZ97drOsKr7ctk5ApG1BaxbqF45NStb7lJLgZLvMHh2SYNMaXiiNfpkaIOQ==',key_name='tempest-keypair-1935929616',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d7764856ebb94acbaa0b40cbbf09cb3d',ramdisk_id='',reservation_id='r-uw59cg9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-820336300',owner_user_name='tempest-AttachInterfacesUnderV243Test-820336300-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:42:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='688e0c65604244fb9d423018bc88d238',uuid=38643437-7822-4834-8301-02d3402cad15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.434 189495 DEBUG nova.network.os_vif_util [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Converting VIF {"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.435 189495 DEBUG nova.network.os_vif_util [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:0b:ad,bridge_name='br-int',has_traffic_filtering=True,id=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb,network=Network(8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d0f49f6-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.436 189495 DEBUG os_vif [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:0b:ad,bridge_name='br-int',has_traffic_filtering=True,id=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb,network=Network(8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d0f49f6-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.436 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.437 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.437 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.440 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.441 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d0f49f6-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.441 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d0f49f6-e0, col_values=(('external_ids', {'iface-id': '7d0f49f6-e0e1-44b1-be36-fa4df3220ddb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:0b:ad', 'vm-uuid': '38643437-7822-4834-8301-02d3402cad15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.443 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:37 compute-0 NetworkManager[56318]: <info>  [1764582157.4450] manager: (tap7d0f49f6-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.445 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.453 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.455 189495 INFO os_vif [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:0b:ad,bridge_name='br-int',has_traffic_filtering=True,id=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb,network=Network(8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d0f49f6-e0')#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.472 189495 DEBUG nova.compute.manager [req-53ffca92-0924-4cc0-b2e8-40c1d603a909 req-f9934bda-4667-4582-9f65-e1a86748103d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received event network-changed-7284339c-1e96-403f-9c31-171c5b077ec6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.473 189495 DEBUG nova.compute.manager [req-53ffca92-0924-4cc0-b2e8-40c1d603a909 req-f9934bda-4667-4582-9f65-e1a86748103d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Refreshing instance network info cache due to event network-changed-7284339c-1e96-403f-9c31-171c5b077ec6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.473 189495 DEBUG oslo_concurrency.lockutils [req-53ffca92-0924-4cc0-b2e8-40c1d603a909 req-f9934bda-4667-4582-9f65-e1a86748103d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.484 189495 DEBUG nova.network.neutron [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Updating instance_info_cache with network_info: [{"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.514 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Releasing lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.515 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Instance network_info: |[{"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.516 189495 DEBUG oslo_concurrency.lockutils [req-53ffca92-0924-4cc0-b2e8-40c1d603a909 req-f9934bda-4667-4582-9f65-e1a86748103d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.516 189495 DEBUG nova.network.neutron [req-53ffca92-0924-4cc0-b2e8-40c1d603a909 req-f9934bda-4667-4582-9f65-e1a86748103d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Refreshing network info cache for port 7284339c-1e96-403f-9c31-171c5b077ec6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.520 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Start _get_guest_xml network_info=[{"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.522 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.523 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.524 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] No VIF found with MAC fa:16:3e:ac:0b:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.525 189495 INFO nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Using config drive
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.533 189495 WARNING nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.542 189495 DEBUG nova.virt.libvirt.host [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.543 189495 DEBUG nova.virt.libvirt.host [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.562 189495 DEBUG nova.virt.libvirt.host [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.563 189495 DEBUG nova.virt.libvirt.host [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.564 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.565 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.566 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.566 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.567 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.567 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.567 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.568 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.568 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.569 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.569 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.570 189495 DEBUG nova.virt.hardware [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.574 189495 DEBUG nova.virt.libvirt.vif [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-888137098',display_name='tempest-ServersTestJSON-server-888137098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-888137098',id=7,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKxy0LvgSEGh7zwBaw2434lp/oxovhilb1JCymft6bK4mzd1ISmXgDkMZaxgg7D6dffcw3GZtknDnbfCvAemwILwMQYmsaVEKt/CvOpSZ7xtNdZ7yRy8gVklm9AuFP94jQ==',key_name='tempest-keypair-1791515073',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5cce108434ca43799d8b26b6c7f91b2d',ramdisk_id='',reservation_id='r-6sspdifp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1829161533',owner_user_name='tempest-ServersTestJSON-1829161533-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:42:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4e22b2cefdd467b833f8e2b663a0b75',uuid=cd1ac331-c146-4eb5-bc53-42a82dd3467b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.575 189495 DEBUG nova.network.os_vif_util [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Converting VIF {"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.576 189495 DEBUG nova.network.os_vif_util [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:37:2d,bridge_name='br-int',has_traffic_filtering=True,id=7284339c-1e96-403f-9c31-171c5b077ec6,network=Network(fb3f5f49-3533-4792-93e2-e7e3702e69d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7284339c-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.576 189495 DEBUG nova.objects.instance [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lazy-loading 'pci_devices' on Instance uuid cd1ac331-c146-4eb5-bc53-42a82dd3467b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.591 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <uuid>cd1ac331-c146-4eb5-bc53-42a82dd3467b</uuid>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <name>instance-00000007</name>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:name>tempest-ServersTestJSON-server-888137098</nova:name>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:42:37</nova:creationTime>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:user uuid="f4e22b2cefdd467b833f8e2b663a0b75">tempest-ServersTestJSON-1829161533-project-member</nova:user>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:project uuid="5cce108434ca43799d8b26b6c7f91b2d">tempest-ServersTestJSON-1829161533</nova:project>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        <nova:port uuid="7284339c-1e96-403f-9c31-171c5b077ec6">
Dec  1 09:42:37 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <system>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="serial">cd1ac331-c146-4eb5-bc53-42a82dd3467b</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="uuid">cd1ac331-c146-4eb5-bc53-42a82dd3467b</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </system>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <os>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </os>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <features>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </features>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk.config"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:41:37:2d"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <target dev="tap7284339c-1e"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/console.log" append="off"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <video>
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </video>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:42:37 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:42:37 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:42:37 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:42:37 compute-0 nova_compute[189491]: </domain>
Dec  1 09:42:37 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.593 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Preparing to wait for external event network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.594 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.594 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.595 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.596 189495 DEBUG nova.virt.libvirt.vif [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-888137098',display_name='tempest-ServersTestJSON-server-888137098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-888137098',id=7,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKxy0LvgSEGh7zwBaw2434lp/oxovhilb1JCymft6bK4mzd1ISmXgDkMZaxgg7D6dffcw3GZtknDnbfCvAemwILwMQYmsaVEKt/CvOpSZ7xtNdZ7yRy8gVklm9AuFP94jQ==',key_name='tempest-keypair-1791515073',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5cce108434ca43799d8b26b6c7f91b2d',ramdisk_id='',reservation_id='r-6sspdifp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1829161533',owner_user_name='tempest-ServersTestJSON-1829161533-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:42:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4e22b2cefdd467b833f8e2b663a0b75',uuid=cd1ac331-c146-4eb5-bc53-42a82dd3467b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.597 189495 DEBUG nova.network.os_vif_util [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Converting VIF {"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.598 189495 DEBUG nova.network.os_vif_util [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:37:2d,bridge_name='br-int',has_traffic_filtering=True,id=7284339c-1e96-403f-9c31-171c5b077ec6,network=Network(fb3f5f49-3533-4792-93e2-e7e3702e69d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7284339c-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.599 189495 DEBUG os_vif [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:37:2d,bridge_name='br-int',has_traffic_filtering=True,id=7284339c-1e96-403f-9c31-171c5b077ec6,network=Network(fb3f5f49-3533-4792-93e2-e7e3702e69d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7284339c-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.600 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.601 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.601 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.613 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.614 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7284339c-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.618 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7284339c-1e, col_values=(('external_ids', {'iface-id': '7284339c-1e96-403f-9c31-171c5b077ec6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:37:2d', 'vm-uuid': 'cd1ac331-c146-4eb5-bc53-42a82dd3467b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:37 compute-0 NetworkManager[56318]: <info>  [1764582157.6230] manager: (tap7284339c-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.621 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.626 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.635 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.637 189495 INFO os_vif [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:37:2d,bridge_name='br-int',has_traffic_filtering=True,id=7284339c-1e96-403f-9c31-171c5b077ec6,network=Network(fb3f5f49-3533-4792-93e2-e7e3702e69d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7284339c-1e')#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.703 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.704 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.705 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] No VIF found with MAC fa:16:3e:41:37:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.706 189495 INFO nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Using config drive#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.730 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.731 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.731 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.749 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.749 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  1 09:42:37 compute-0 nova_compute[189491]: 2025-12-01 09:42:37.750 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.071 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.236 189495 INFO nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Creating config drive at /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk.config#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.242 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg7fuet7v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.366 189495 INFO nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Creating config drive at /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk.config#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.373 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg9kwnkck execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.391 189495 DEBUG oslo_concurrency.processutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg7fuet7v" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.5038] manager: (tap7d0f49f6-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Dec  1 09:42:38 compute-0 kernel: tap7d0f49f6-e0: entered promiscuous mode
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00066|binding|INFO|Claiming lport 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb for this chassis.
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.504 189495 DEBUG oslo_concurrency.processutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg9kwnkck" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.507 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00067|binding|INFO|7d0f49f6-e0e1-44b1-be36-fa4df3220ddb: Claiming fa:16:3e:ac:0b:ad 10.100.0.9
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.518 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:0b:ad 10.100.0.9'], port_security=['fa:16:3e:ac:0b:ad 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '38643437-7822-4834-8301-02d3402cad15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7764856ebb94acbaa0b40cbbf09cb3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '956cfa36-e252-4c20-b19a-437aef36f7e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63038633-add0-4830-ba46-d2e62ec7d35b, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.519 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb in datapath 8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0 bound to our chassis#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.521 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.535 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b25925-7cf9-4078-8585-5d90cfd4e2ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.536 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f64018c-11 in ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.538 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f64018c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.539 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[23a5d156-46c0-4307-a003-34852be176b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.540 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7927875b-9390-4166-beb5-b24cdbd14126]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.551 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[c677ef79-32c0-497f-b0b3-14ddbddc7cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 systemd-udevd[251841]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.580 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a913c746-4139-4f14-969d-976721c304ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.580 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.5891] device (tap7d0f49f6-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.5899] device (tap7d0f49f6-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.6020] manager: (tap7284339c-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Dec  1 09:42:38 compute-0 kernel: tap7284339c-1e: entered promiscuous mode
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.610 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 systemd-machined[155812]: New machine qemu-6-instance-00000006.
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00068|binding|INFO|Claiming lport 7284339c-1e96-403f-9c31-171c5b077ec6 for this chassis.
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00069|binding|INFO|7284339c-1e96-403f-9c31-171c5b077ec6: Claiming fa:16:3e:41:37:2d 10.100.0.6
Dec  1 09:42:38 compute-0 podman[251795]: 2025-12-01 09:42:38.613861408 +0000 UTC m=+0.132563661 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00070|binding|INFO|Setting lport 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb ovn-installed in OVS
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00071|binding|INFO|Setting lport 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb up in Southbound
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.619 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.623 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:37:2d 10.100.0.6'], port_security=['fa:16:3e:41:37:2d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cd1ac331-c146-4eb5-bc53-42a82dd3467b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb3f5f49-3533-4792-93e2-e7e3702e69d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5cce108434ca43799d8b26b6c7f91b2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5e0448be-5087-4376-919f-cd5e74d4cf16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4999a74f-f7a7-4c0a-83f8-e48679ee8417, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=7284339c-1e96-403f-9c31-171c5b077ec6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.6284] device (tap7284339c-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.6294] device (tap7284339c-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.631 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[4b39760b-ca7b-4403-9bbd-f9bd6041e3a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 systemd-machined[155812]: New machine qemu-7-instance-00000007.
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.6577] manager: (tap8f64018c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.657 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b63822-9f76-49ea-adeb-066056fe2572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec  1 09:42:38 compute-0 podman[251796]: 2025-12-01 09:42:38.672216161 +0000 UTC m=+0.184183697 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.672 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00072|binding|INFO|Setting lport 7284339c-1e96-403f-9c31-171c5b077ec6 ovn-installed in OVS
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00073|binding|INFO|Setting lport 7284339c-1e96-403f-9c31-171c5b077ec6 up in Southbound
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.677 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.698 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[a399db44-ad51-424f-8142-ced7b82db82c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.702 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8747ce-8a9d-453d-afc7-a70d47a022a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.723 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.7257] device (tap8f64018c-10): carrier: link connected
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.730 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[73d1b389-cfd5-4c6f-b11f-c60f985bc901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.753 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b713b0-08e8-4312-84ad-4726c262a2a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f64018c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:0c:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539842, 'reachable_time': 39293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251897, 'error': None, 'target': 'ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.772 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4a7ea7-a855-4d71-81e8-38ca95e7be25]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:cda'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539842, 'tstamp': 539842}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251901, 'error': None, 'target': 'ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.791 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[636254f9-5a3f-423a-9ddd-a67674e7feaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f64018c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:0c:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539842, 'reachable_time': 39293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251903, 'error': None, 'target': 'ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.828 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[21545db3-1aa4-4321-80f5-4e4fa9758ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.895 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[684beb83-6da3-4bb1-a61d-789c61d6da5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.897 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f64018c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.897 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.898 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f64018c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:38 compute-0 NetworkManager[56318]: <info>  [1764582158.9005] manager: (tap8f64018c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec  1 09:42:38 compute-0 kernel: tap8f64018c-10: entered promiscuous mode
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.905 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.908 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f64018c-10, col_values=(('external_ids', {'iface-id': '043e8190-2d11-42d5-822a-8b7d16589eb2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:38 compute-0 ovn_controller[97794]: 2025-12-01T09:42:38Z|00074|binding|INFO|Releasing lport 043e8190-2d11-42d5-822a-8b7d16589eb2 from this chassis (sb_readonly=0)
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.910 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 nova_compute[189491]: 2025-12-01 09:42:38.928 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.928 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.929 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[25f0c54f-c463-49c9-a579-3a0b436255df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.930 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0.pid.haproxy
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID 8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:42:38 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:38.931 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0', 'env', 'PROCESS_TAG=haproxy-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.076 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582159.076025, cd1ac331-c146-4eb5-bc53-42a82dd3467b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.077 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] VM Started (Lifecycle Event)#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.104 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.110 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582159.0761764, cd1ac331-c146-4eb5-bc53-42a82dd3467b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.110 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.139 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.148 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.174 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.176 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582159.0830312, 38643437-7822-4834-8301-02d3402cad15 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.176 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] VM Started (Lifecycle Event)#033[00m
Dec  1 09:42:39 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.201 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.207 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582159.0834577, 38643437-7822-4834-8301-02d3402cad15 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.207 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:42:39 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.239 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.244 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.268 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.342 189495 DEBUG nova.compute.manager [req-8dcdee76-e2b8-49cd-874d-6b181187fcfc req-20d4406c-41c5-4c16-846f-29ccd13dc24d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Received event network-vif-plugged-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.342 189495 DEBUG oslo_concurrency.lockutils [req-8dcdee76-e2b8-49cd-874d-6b181187fcfc req-20d4406c-41c5-4c16-846f-29ccd13dc24d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "38643437-7822-4834-8301-02d3402cad15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.342 189495 DEBUG oslo_concurrency.lockutils [req-8dcdee76-e2b8-49cd-874d-6b181187fcfc req-20d4406c-41c5-4c16-846f-29ccd13dc24d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "38643437-7822-4834-8301-02d3402cad15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.343 189495 DEBUG oslo_concurrency.lockutils [req-8dcdee76-e2b8-49cd-874d-6b181187fcfc req-20d4406c-41c5-4c16-846f-29ccd13dc24d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "38643437-7822-4834-8301-02d3402cad15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.343 189495 DEBUG nova.compute.manager [req-8dcdee76-e2b8-49cd-874d-6b181187fcfc req-20d4406c-41c5-4c16-846f-29ccd13dc24d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Processing event network-vif-plugged-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.343 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.350 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582159.3496847, 38643437-7822-4834-8301-02d3402cad15 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.350 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.352 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.358 189495 INFO nova.virt.libvirt.driver [-] [instance: 38643437-7822-4834-8301-02d3402cad15] Instance spawned successfully.#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.359 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.392 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.397 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.407 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.407 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.408 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.408 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.408 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.409 189495 DEBUG nova.virt.libvirt.driver [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.419 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:42:39 compute-0 podman[251968]: 2025-12-01 09:42:39.456529799 +0000 UTC m=+0.088390961 container create 7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.479 189495 INFO nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Took 10.15 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.479 189495 DEBUG nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:42:39 compute-0 podman[251968]: 2025-12-01 09:42:39.414114304 +0000 UTC m=+0.045975486 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:42:39 compute-0 systemd[1]: Started libpod-conmon-7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48.scope.
Dec  1 09:42:39 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63e21dff89f098e963c41c67239ba6583cc5294985bc66a33761e56972db1b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.573 189495 INFO nova.compute.manager [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Took 10.85 seconds to build instance.#033[00m
Dec  1 09:42:39 compute-0 podman[251968]: 2025-12-01 09:42:39.577446411 +0000 UTC m=+0.209307583 container init 7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:42:39 compute-0 podman[251968]: 2025-12-01 09:42:39.589051009 +0000 UTC m=+0.220912181 container start 7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:42:39 compute-0 nova_compute[189491]: 2025-12-01 09:42:39.590 189495 DEBUG oslo_concurrency.lockutils [None req-c8442ce9-a144-480d-8614-19a73188c31c 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "38643437-7822-4834-8301-02d3402cad15" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:39 compute-0 neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0[251984]: [NOTICE]   (251988) : New worker (251990) forked
Dec  1 09:42:39 compute-0 neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0[251984]: [NOTICE]   (251988) : Loading success.
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.663 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 7284339c-1e96-403f-9c31-171c5b077ec6 in datapath fb3f5f49-3533-4792-93e2-e7e3702e69d4 unbound from our chassis#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.667 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb3f5f49-3533-4792-93e2-e7e3702e69d4#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.678 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[3861bdca-78ec-43c0-8fea-907d6ed20376]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.679 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb3f5f49-31 in ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.681 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb3f5f49-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.682 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6875a3-d885-44e7-b613-04f39b22e9f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.683 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f56c4f1a-8863-4d33-862a-ba725f1cac27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.694 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[47df1fd7-b635-4471-85cb-ebc666e0164c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.722 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa9b764-a846-46a2-bba1-2823dc47bfe4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.757 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b7f9d0-4e43-4ec3-9fdd-dda91b687c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 systemd-udevd[251885]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:42:39 compute-0 NetworkManager[56318]: <info>  [1764582159.7671] manager: (tapfb3f5f49-30): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.765 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[92f3e95d-1f31-4a75-9529-6a811572bc08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.802 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9bd19a-f2cb-4d63-92cd-b77e7ee4a185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.806 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd15007-2b5c-4e16-b2c2-e8f18c132adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 NetworkManager[56318]: <info>  [1764582159.8370] device (tapfb3f5f49-30): carrier: link connected
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.848 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[b95ef236-e4f3-4bdd-82e3-c242319b6b2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.868 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c75a8f44-4264-4894-953b-84ec30966e70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb3f5f49-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:ff:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539953, 'reachable_time': 27500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252010, 'error': None, 'target': 'ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.888 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8a6c0d-faa8-47b0-b603-b9227a6e8dec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:ff97'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539953, 'tstamp': 539953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252011, 'error': None, 'target': 'ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.906 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[86de07b2-a94d-4231-94fd-162fd3d2f513]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb3f5f49-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:ff:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539953, 'reachable_time': 27500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252012, 'error': None, 'target': 'ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:39.940 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[8875f572-aa11-44e8-be3e-b064ff359e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.007 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d7829fe3-3318-4c18-a400-64327483fc90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.009 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb3f5f49-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.010 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.011 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb3f5f49-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:40 compute-0 NetworkManager[56318]: <info>  [1764582160.0146] manager: (tapfb3f5f49-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.014 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:40 compute-0 kernel: tapfb3f5f49-30: entered promiscuous mode
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.020 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.026 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb3f5f49-30, col_values=(('external_ids', {'iface-id': 'd1863f81-5419-49a7-8ffb-cbd81f25c00d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:40 compute-0 ovn_controller[97794]: 2025-12-01T09:42:40Z|00075|binding|INFO|Releasing lport d1863f81-5419-49a7-8ffb-cbd81f25c00d from this chassis (sb_readonly=0)
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.028 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.056 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.056 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.057 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb3f5f49-3533-4792-93e2-e7e3702e69d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb3f5f49-3533-4792-93e2-e7e3702e69d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.059 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[252bb93a-7b4d-4d82-a50d-4a9522c89a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.060 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-fb3f5f49-3533-4792-93e2-e7e3702e69d4
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/fb3f5f49-3533-4792-93e2-e7e3702e69d4.pid.haproxy
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID fb3f5f49-3533-4792-93e2-e7e3702e69d4
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:42:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:40.060 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4', 'env', 'PROCESS_TAG=haproxy-fb3f5f49-3533-4792-93e2-e7e3702e69d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb3f5f49-3533-4792-93e2-e7e3702e69d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.315 189495 DEBUG nova.network.neutron [req-80948ec6-cf95-4bc4-98c9-080195c71b5b req-22e9412f-f33b-4693-862f-a56dfcdfe54e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updated VIF entry in instance network info cache for port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.316 189495 DEBUG nova.network.neutron [req-80948ec6-cf95-4bc4-98c9-080195c71b5b req-22e9412f-f33b-4693-862f-a56dfcdfe54e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.346 189495 DEBUG oslo_concurrency.lockutils [req-80948ec6-cf95-4bc4-98c9-080195c71b5b req-22e9412f-f33b-4693-862f-a56dfcdfe54e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.353 189495 DEBUG nova.network.neutron [req-53ffca92-0924-4cc0-b2e8-40c1d603a909 req-f9934bda-4667-4582-9f65-e1a86748103d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Updated VIF entry in instance network info cache for port 7284339c-1e96-403f-9c31-171c5b077ec6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.353 189495 DEBUG nova.network.neutron [req-53ffca92-0924-4cc0-b2e8-40c1d603a909 req-f9934bda-4667-4582-9f65-e1a86748103d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Updating instance_info_cache with network_info: [{"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:42:40 compute-0 nova_compute[189491]: 2025-12-01 09:42:40.378 189495 DEBUG oslo_concurrency.lockutils [req-53ffca92-0924-4cc0-b2e8-40c1d603a909 req-f9934bda-4667-4582-9f65-e1a86748103d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:42:40 compute-0 podman[252043]: 2025-12-01 09:42:40.508829971 +0000 UTC m=+0.091156151 container create 89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:42:40 compute-0 podman[252043]: 2025-12-01 09:42:40.45618859 +0000 UTC m=+0.038514830 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:42:40 compute-0 systemd[1]: Started libpod-conmon-89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079.scope.
Dec  1 09:42:40 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc85a908754b73213d88d22d36950e2606b8e68ca0e77459b4251d7d50196a8d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:42:40 compute-0 podman[252043]: 2025-12-01 09:42:40.619099097 +0000 UTC m=+0.201425277 container init 89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 09:42:40 compute-0 podman[252043]: 2025-12-01 09:42:40.628614713 +0000 UTC m=+0.210940893 container start 89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 09:42:40 compute-0 neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4[252058]: [NOTICE]   (252062) : New worker (252064) forked
Dec  1 09:42:40 compute-0 neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4[252058]: [NOTICE]   (252062) : Loading success.
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.115 189495 DEBUG nova.compute.manager [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Received event network-vif-plugged-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.115 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "38643437-7822-4834-8301-02d3402cad15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.115 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "38643437-7822-4834-8301-02d3402cad15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.115 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "38643437-7822-4834-8301-02d3402cad15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.116 189495 DEBUG nova.compute.manager [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] No waiting events found dispatching network-vif-plugged-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.116 189495 WARNING nova.compute.manager [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Received unexpected event network-vif-plugged-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb for instance with vm_state active and task_state None.#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.116 189495 DEBUG nova.compute.manager [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received event network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.116 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.117 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.117 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.117 189495 DEBUG nova.compute.manager [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Processing event network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.117 189495 DEBUG nova.compute.manager [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received event network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.117 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.118 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.118 189495 DEBUG oslo_concurrency.lockutils [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.118 189495 DEBUG nova.compute.manager [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] No waiting events found dispatching network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.118 189495 WARNING nova.compute.manager [req-168da699-af5b-4f27-a1d4-0d0600da75d0 req-f3f5fb98-0cce-4dd4-b6a5-15efd55a5065 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received unexpected event network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 for instance with vm_state building and task_state spawning.#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.119 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.125 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582162.1246517, cd1ac331-c146-4eb5-bc53-42a82dd3467b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.125 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.128 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.134 189495 INFO nova.virt.libvirt.driver [-] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Instance spawned successfully.#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.135 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.156 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.165 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.171 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.172 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.172 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.172 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.173 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.173 189495 DEBUG nova.virt.libvirt.driver [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.185 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.281 189495 INFO nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Took 12.69 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.282 189495 DEBUG nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.412 189495 INFO nova.compute.manager [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Took 13.45 seconds to build instance.#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.583 189495 DEBUG oslo_concurrency.lockutils [None req-2f0731c4-fe5c-4999-81fd-54e75f32dfb4 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.625 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.864 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.864 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.865 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:42 compute-0 nova_compute[189491]: 2025-12-01 09:42:42.865 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.074 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.295 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:43 compute-0 NetworkManager[56318]: <info>  [1764582163.3026] manager: (patch-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec  1 09:42:43 compute-0 NetworkManager[56318]: <info>  [1764582163.3056] manager: (patch-br-int-to-provnet-67977a6b-d92d-45ee-82d4-e7c8569d3129): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.315 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.363 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.364 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:43 compute-0 ovn_controller[97794]: 2025-12-01T09:42:43Z|00076|binding|INFO|Releasing lport d1863f81-5419-49a7-8ffb-cbd81f25c00d from this chassis (sb_readonly=0)
Dec  1 09:42:43 compute-0 ovn_controller[97794]: 2025-12-01T09:42:43Z|00077|binding|INFO|Releasing lport 043e8190-2d11-42d5-822a-8b7d16589eb2 from this chassis (sb_readonly=0)
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.390 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.444 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.454 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.529 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.531 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:42:43 compute-0 nova_compute[189491]: 2025-12-01 09:42:43.607 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.053 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.055 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5111MB free_disk=72.33885192871094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.055 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.056 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.336 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 38643437-7822-4834-8301-02d3402cad15 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.338 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance cd1ac331-c146-4eb5-bc53-42a82dd3467b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.339 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.340 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.361 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.382 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.383 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.406 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.434 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.499 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.514 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.546 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.546 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.575 189495 DEBUG nova.compute.manager [req-84d52e07-2525-4d5c-b042-11f07c634325 req-ff33b6cf-9d52-4b09-96a5-4a75e36fad30 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Received event network-changed-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.575 189495 DEBUG nova.compute.manager [req-84d52e07-2525-4d5c-b042-11f07c634325 req-ff33b6cf-9d52-4b09-96a5-4a75e36fad30 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Refreshing instance network info cache due to event network-changed-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.576 189495 DEBUG oslo_concurrency.lockutils [req-84d52e07-2525-4d5c-b042-11f07c634325 req-ff33b6cf-9d52-4b09-96a5-4a75e36fad30 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.576 189495 DEBUG oslo_concurrency.lockutils [req-84d52e07-2525-4d5c-b042-11f07c634325 req-ff33b6cf-9d52-4b09-96a5-4a75e36fad30 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:42:44 compute-0 nova_compute[189491]: 2025-12-01 09:42:44.577 189495 DEBUG nova.network.neutron [req-84d52e07-2525-4d5c-b042-11f07c634325 req-ff33b6cf-9d52-4b09-96a5-4a75e36fad30 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Refreshing network info cache for port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:42:45 compute-0 nova_compute[189491]: 2025-12-01 09:42:45.548 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:45 compute-0 nova_compute[189491]: 2025-12-01 09:42:45.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:45 compute-0 nova_compute[189491]: 2025-12-01 09:42:45.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:46 compute-0 nova_compute[189491]: 2025-12-01 09:42:46.677 189495 DEBUG nova.compute.manager [req-1a038efb-9b89-4d1f-b7d2-075ef5bf1424 req-2d58fe47-d0c8-411f-bad6-c24cdef1961f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received event network-changed-7284339c-1e96-403f-9c31-171c5b077ec6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:46 compute-0 nova_compute[189491]: 2025-12-01 09:42:46.678 189495 DEBUG nova.compute.manager [req-1a038efb-9b89-4d1f-b7d2-075ef5bf1424 req-2d58fe47-d0c8-411f-bad6-c24cdef1961f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Refreshing instance network info cache due to event network-changed-7284339c-1e96-403f-9c31-171c5b077ec6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:42:46 compute-0 nova_compute[189491]: 2025-12-01 09:42:46.679 189495 DEBUG oslo_concurrency.lockutils [req-1a038efb-9b89-4d1f-b7d2-075ef5bf1424 req-2d58fe47-d0c8-411f-bad6-c24cdef1961f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:42:46 compute-0 nova_compute[189491]: 2025-12-01 09:42:46.679 189495 DEBUG oslo_concurrency.lockutils [req-1a038efb-9b89-4d1f-b7d2-075ef5bf1424 req-2d58fe47-d0c8-411f-bad6-c24cdef1961f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:42:46 compute-0 nova_compute[189491]: 2025-12-01 09:42:46.680 189495 DEBUG nova.network.neutron [req-1a038efb-9b89-4d1f-b7d2-075ef5bf1424 req-2d58fe47-d0c8-411f-bad6-c24cdef1961f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Refreshing network info cache for port 7284339c-1e96-403f-9c31-171c5b077ec6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:42:47 compute-0 nova_compute[189491]: 2025-12-01 09:42:47.628 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:48 compute-0 nova_compute[189491]: 2025-12-01 09:42:48.077 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:48 compute-0 nova_compute[189491]: 2025-12-01 09:42:48.340 189495 DEBUG nova.network.neutron [req-84d52e07-2525-4d5c-b042-11f07c634325 req-ff33b6cf-9d52-4b09-96a5-4a75e36fad30 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updated VIF entry in instance network info cache for port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:42:48 compute-0 nova_compute[189491]: 2025-12-01 09:42:48.341 189495 DEBUG nova.network.neutron [req-84d52e07-2525-4d5c-b042-11f07c634325 req-ff33b6cf-9d52-4b09-96a5-4a75e36fad30 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:42:48 compute-0 nova_compute[189491]: 2025-12-01 09:42:48.364 189495 DEBUG oslo_concurrency.lockutils [req-84d52e07-2525-4d5c-b042-11f07c634325 req-ff33b6cf-9d52-4b09-96a5-4a75e36fad30 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:42:48 compute-0 nova_compute[189491]: 2025-12-01 09:42:48.710 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:48 compute-0 nova_compute[189491]: 2025-12-01 09:42:48.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:48 compute-0 nova_compute[189491]: 2025-12-01 09:42:48.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:48 compute-0 nova_compute[189491]: 2025-12-01 09:42:48.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.611 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.613 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.613 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.613 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.614 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.616 189495 INFO nova.compute.manager [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Terminating instance#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.617 189495 DEBUG nova.compute.manager [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:42:49 compute-0 kernel: tap7284339c-1e (unregistering): left promiscuous mode
Dec  1 09:42:49 compute-0 NetworkManager[56318]: <info>  [1764582169.6642] device (tap7284339c-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.673 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:49 compute-0 ovn_controller[97794]: 2025-12-01T09:42:49Z|00078|binding|INFO|Releasing lport 7284339c-1e96-403f-9c31-171c5b077ec6 from this chassis (sb_readonly=0)
Dec  1 09:42:49 compute-0 ovn_controller[97794]: 2025-12-01T09:42:49Z|00079|binding|INFO|Setting lport 7284339c-1e96-403f-9c31-171c5b077ec6 down in Southbound
Dec  1 09:42:49 compute-0 ovn_controller[97794]: 2025-12-01T09:42:49Z|00080|binding|INFO|Removing iface tap7284339c-1e ovn-installed in OVS
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.676 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:49 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:49.682 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:37:2d 10.100.0.6'], port_security=['fa:16:3e:41:37:2d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cd1ac331-c146-4eb5-bc53-42a82dd3467b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb3f5f49-3533-4792-93e2-e7e3702e69d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5cce108434ca43799d8b26b6c7f91b2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5e0448be-5087-4376-919f-cd5e74d4cf16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4999a74f-f7a7-4c0a-83f8-e48679ee8417, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=7284339c-1e96-403f-9c31-171c5b077ec6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:42:49 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:49.683 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 7284339c-1e96-403f-9c31-171c5b077ec6 in datapath fb3f5f49-3533-4792-93e2-e7e3702e69d4 unbound from our chassis#033[00m
Dec  1 09:42:49 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:49.685 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb3f5f49-3533-4792-93e2-e7e3702e69d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:42:49 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:49.688 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[69c7b79c-4f3a-4d4e-8b49-1087526dc50c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:49 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:49.689 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4 namespace which is not needed anymore#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.692 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:49 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  1 09:42:49 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 8.095s CPU time.
Dec  1 09:42:49 compute-0 systemd-machined[155812]: Machine qemu-7-instance-00000007 terminated.
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:49 compute-0 podman[252088]: 2025-12-01 09:42:49.734388007 +0000 UTC m=+0.101788556 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:42:49 compute-0 podman[252089]: 2025-12-01 09:42:49.769267926 +0000 UTC m=+0.139754251 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:42:49 compute-0 neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4[252058]: [NOTICE]   (252062) : haproxy version is 2.8.14-c23fe91
Dec  1 09:42:49 compute-0 neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4[252058]: [NOTICE]   (252062) : path to executable is /usr/sbin/haproxy
Dec  1 09:42:49 compute-0 neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4[252058]: [WARNING]  (252062) : Exiting Master process...
Dec  1 09:42:49 compute-0 neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4[252058]: [ALERT]    (252062) : Current worker (252064) exited with code 143 (Terminated)
Dec  1 09:42:49 compute-0 neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4[252058]: [WARNING]  (252062) : All workers exited. Exiting... (0)
Dec  1 09:42:49 compute-0 systemd[1]: libpod-89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079.scope: Deactivated successfully.
Dec  1 09:42:49 compute-0 podman[252149]: 2025-12-01 09:42:49.864517267 +0000 UTC m=+0.062056666 container died 89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079-userdata-shm.mount: Deactivated successfully.
Dec  1 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc85a908754b73213d88d22d36950e2606b8e68ca0e77459b4251d7d50196a8d-merged.mount: Deactivated successfully.
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.913 189495 INFO nova.virt.libvirt.driver [-] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Instance destroyed successfully.#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.915 189495 DEBUG nova.objects.instance [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lazy-loading 'resources' on Instance uuid cd1ac331-c146-4eb5-bc53-42a82dd3467b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:42:49 compute-0 podman[252149]: 2025-12-01 09:42:49.928855109 +0000 UTC m=+0.126394508 container cleanup 89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:42:49 compute-0 systemd[1]: libpod-conmon-89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079.scope: Deactivated successfully.
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.944 189495 DEBUG nova.virt.libvirt.vif [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-888137098',display_name='tempest-ServersTestJSON-server-888137098',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-888137098',id=7,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKxy0LvgSEGh7zwBaw2434lp/oxovhilb1JCymft6bK4mzd1ISmXgDkMZaxgg7D6dffcw3GZtknDnbfCvAemwILwMQYmsaVEKt/CvOpSZ7xtNdZ7yRy8gVklm9AuFP94jQ==',key_name='tempest-keypair-1791515073',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:42:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5cce108434ca43799d8b26b6c7f91b2d',ramdisk_id='',reservation_id='r-6sspdifp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1829161533',owner_user_name='tempest-ServersTestJSON-1829161533-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:42:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4e22b2cefdd467b833f8e2b663a0b75',uuid=cd1ac331-c146-4eb5-bc53-42a82dd3467b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.945 189495 DEBUG nova.network.os_vif_util [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Converting VIF {"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.945 189495 DEBUG nova.network.os_vif_util [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:37:2d,bridge_name='br-int',has_traffic_filtering=True,id=7284339c-1e96-403f-9c31-171c5b077ec6,network=Network(fb3f5f49-3533-4792-93e2-e7e3702e69d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7284339c-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.946 189495 DEBUG os_vif [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:37:2d,bridge_name='br-int',has_traffic_filtering=True,id=7284339c-1e96-403f-9c31-171c5b077ec6,network=Network(fb3f5f49-3533-4792-93e2-e7e3702e69d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7284339c-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.948 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.949 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7284339c-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.951 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.954 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.957 189495 INFO os_vif [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:37:2d,bridge_name='br-int',has_traffic_filtering=True,id=7284339c-1e96-403f-9c31-171c5b077ec6,network=Network(fb3f5f49-3533-4792-93e2-e7e3702e69d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7284339c-1e')#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.958 189495 INFO nova.virt.libvirt.driver [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Deleting instance files /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b_del#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.958 189495 INFO nova.virt.libvirt.driver [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Deletion of /var/lib/nova/instances/cd1ac331-c146-4eb5-bc53-42a82dd3467b_del complete#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.982 189495 DEBUG nova.compute.manager [req-80434cbd-c057-4fd2-8230-c1b7d5bcc558 req-ab06e1cc-7b16-4ec7-bc67-feec950c1ab1 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received event network-vif-unplugged-7284339c-1e96-403f-9c31-171c5b077ec6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.982 189495 DEBUG oslo_concurrency.lockutils [req-80434cbd-c057-4fd2-8230-c1b7d5bcc558 req-ab06e1cc-7b16-4ec7-bc67-feec950c1ab1 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.982 189495 DEBUG oslo_concurrency.lockutils [req-80434cbd-c057-4fd2-8230-c1b7d5bcc558 req-ab06e1cc-7b16-4ec7-bc67-feec950c1ab1 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.982 189495 DEBUG oslo_concurrency.lockutils [req-80434cbd-c057-4fd2-8230-c1b7d5bcc558 req-ab06e1cc-7b16-4ec7-bc67-feec950c1ab1 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.983 189495 DEBUG nova.compute.manager [req-80434cbd-c057-4fd2-8230-c1b7d5bcc558 req-ab06e1cc-7b16-4ec7-bc67-feec950c1ab1 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] No waiting events found dispatching network-vif-unplugged-7284339c-1e96-403f-9c31-171c5b077ec6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:42:49 compute-0 nova_compute[189491]: 2025-12-01 09:42:49.983 189495 DEBUG nova.compute.manager [req-80434cbd-c057-4fd2-8230-c1b7d5bcc558 req-ab06e1cc-7b16-4ec7-bc67-feec950c1ab1 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received event network-vif-unplugged-7284339c-1e96-403f-9c31-171c5b077ec6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:42:50 compute-0 podman[252194]: 2025-12-01 09:42:50.005393924 +0000 UTC m=+0.047988215 container remove 89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.016 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[348dbe2b-bd6c-4e68-9f1c-93745d265c02]: (4, ('Mon Dec  1 09:42:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4 (89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079)\n89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079\nMon Dec  1 09:42:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4 (89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079)\n89b97cedc569ad783f7b511bfc22466caea5ba962ed837ce63220d05105a5079\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.018 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7852b8-2c63-4b7b-b864-9a295607f124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.019 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb3f5f49-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:42:50 compute-0 kernel: tapfb3f5f49-30: left promiscuous mode
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.020 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.034 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.036 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[04ecfc30-b4dd-419c-9271-8b71750ce73a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.048 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[01ba4fa0-2597-4989-a6da-5b7fd44368c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.049 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[ed23ce9d-2f1d-47c6-b638-d6a5e99d3ffc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.061 189495 INFO nova.compute.manager [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.062 189495 DEBUG oslo.service.loopingcall [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.062 189495 DEBUG nova.compute.manager [-] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.063 189495 DEBUG nova.network.neutron [-] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.067 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f998db29-b28c-40ab-9d3f-b98dc4493ddb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539944, 'reachable_time': 36843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252206, 'error': None, 'target': 'ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:50 compute-0 systemd[1]: run-netns-ovnmeta\x2dfb3f5f49\x2d3533\x2d4792\x2d93e2\x2de7e3702e69d4.mount: Deactivated successfully.
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.074 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb3f5f49-3533-4792-93e2-e7e3702e69d4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:42:50 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:42:50.074 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[8362e0b9-aae5-4af6-9978-ac21ac38de2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.499 189495 DEBUG nova.network.neutron [req-1a038efb-9b89-4d1f-b7d2-075ef5bf1424 req-2d58fe47-d0c8-411f-bad6-c24cdef1961f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Updated VIF entry in instance network info cache for port 7284339c-1e96-403f-9c31-171c5b077ec6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.499 189495 DEBUG nova.network.neutron [req-1a038efb-9b89-4d1f-b7d2-075ef5bf1424 req-2d58fe47-d0c8-411f-bad6-c24cdef1961f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Updating instance_info_cache with network_info: [{"id": "7284339c-1e96-403f-9c31-171c5b077ec6", "address": "fa:16:3e:41:37:2d", "network": {"id": "fb3f5f49-3533-4792-93e2-e7e3702e69d4", "bridge": "br-int", "label": "tempest-ServersTestJSON-2118357144-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5cce108434ca43799d8b26b6c7f91b2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7284339c-1e", "ovs_interfaceid": "7284339c-1e96-403f-9c31-171c5b077ec6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:42:50 compute-0 nova_compute[189491]: 2025-12-01 09:42:50.525 189495 DEBUG oslo_concurrency.lockutils [req-1a038efb-9b89-4d1f-b7d2-075ef5bf1424 req-2d58fe47-d0c8-411f-bad6-c24cdef1961f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-cd1ac331-c146-4eb5-bc53-42a82dd3467b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.546 189495 DEBUG nova.network.neutron [-] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.563 189495 INFO nova.compute.manager [-] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Took 1.50 seconds to deallocate network for instance.#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.622 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.623 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.699 189495 DEBUG nova.compute.provider_tree [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.712 189495 DEBUG nova.scheduler.client.report [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.734 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.757 189495 INFO nova.scheduler.client.report [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Deleted allocations for instance cd1ac331-c146-4eb5-bc53-42a82dd3467b#033[00m
Dec  1 09:42:51 compute-0 nova_compute[189491]: 2025-12-01 09:42:51.838 189495 DEBUG oslo_concurrency.lockutils [None req-cdaedbc1-c35d-4cd2-bf8e-25dba88e6e00 f4e22b2cefdd467b833f8e2b663a0b75 5cce108434ca43799d8b26b6c7f91b2d - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.247 189495 DEBUG nova.compute.manager [req-aea258d9-d25f-45ab-a441-03e150f613b5 req-e29912d7-2516-482b-a3e6-41d042856291 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received event network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.247 189495 DEBUG oslo_concurrency.lockutils [req-aea258d9-d25f-45ab-a441-03e150f613b5 req-e29912d7-2516-482b-a3e6-41d042856291 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.247 189495 DEBUG oslo_concurrency.lockutils [req-aea258d9-d25f-45ab-a441-03e150f613b5 req-e29912d7-2516-482b-a3e6-41d042856291 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.248 189495 DEBUG oslo_concurrency.lockutils [req-aea258d9-d25f-45ab-a441-03e150f613b5 req-e29912d7-2516-482b-a3e6-41d042856291 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "cd1ac331-c146-4eb5-bc53-42a82dd3467b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.248 189495 DEBUG nova.compute.manager [req-aea258d9-d25f-45ab-a441-03e150f613b5 req-e29912d7-2516-482b-a3e6-41d042856291 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] No waiting events found dispatching network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.248 189495 WARNING nova.compute.manager [req-aea258d9-d25f-45ab-a441-03e150f613b5 req-e29912d7-2516-482b-a3e6-41d042856291 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received unexpected event network-vif-plugged-7284339c-1e96-403f-9c31-171c5b077ec6 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.248 189495 DEBUG nova.compute.manager [req-aea258d9-d25f-45ab-a441-03e150f613b5 req-e29912d7-2516-482b-a3e6-41d042856291 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Received event network-vif-deleted-7284339c-1e96-403f-9c31-171c5b077ec6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.729 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.729 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.730 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:42:52 compute-0 nova_compute[189491]: 2025-12-01 09:42:52.746 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:42:53 compute-0 nova_compute[189491]: 2025-12-01 09:42:53.080 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:54 compute-0 nova_compute[189491]: 2025-12-01 09:42:54.096 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:54 compute-0 nova_compute[189491]: 2025-12-01 09:42:54.952 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:56 compute-0 podman[252207]: 2025-12-01 09:42:56.713580532 +0000 UTC m=+0.087635923 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec  1 09:42:58 compute-0 nova_compute[189491]: 2025-12-01 09:42:58.083 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:58 compute-0 podman[252227]: 2025-12-01 09:42:58.709413356 +0000 UTC m=+0.071994713 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, architecture=x86_64, container_name=kepler, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  1 09:42:58 compute-0 podman[252226]: 2025-12-01 09:42:58.715876247 +0000 UTC m=+0.082181927 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:42:58 compute-0 ovn_controller[97794]: 2025-12-01T09:42:58Z|00081|binding|INFO|Releasing lport 043e8190-2d11-42d5-822a-8b7d16589eb2 from this chassis (sb_readonly=0)
Dec  1 09:42:58 compute-0 nova_compute[189491]: 2025-12-01 09:42:58.864 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:42:59 compute-0 podman[203700]: time="2025-12-01T09:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:42:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:42:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 09:42:59 compute-0 nova_compute[189491]: 2025-12-01 09:42:59.955 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:00 compute-0 nova_compute[189491]: 2025-12-01 09:43:00.897 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:01 compute-0 openstack_network_exporter[205866]: ERROR   09:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:43:01 compute-0 openstack_network_exporter[205866]: ERROR   09:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:43:01 compute-0 openstack_network_exporter[205866]: ERROR   09:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:43:01 compute-0 openstack_network_exporter[205866]: ERROR   09:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:43:01 compute-0 openstack_network_exporter[205866]: ERROR   09:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:43:03 compute-0 nova_compute[189491]: 2025-12-01 09:43:03.086 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:04 compute-0 nova_compute[189491]: 2025-12-01 09:43:04.910 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764582169.9075484, cd1ac331-c146-4eb5-bc53-42a82dd3467b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:43:04 compute-0 nova_compute[189491]: 2025-12-01 09:43:04.910 189495 INFO nova.compute.manager [-] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:43:04 compute-0 nova_compute[189491]: 2025-12-01 09:43:04.938 189495 DEBUG nova.compute.manager [None req-a24baf99-600a-4747-a47e-4b2c5acc23f7 - - - - - -] [instance: cd1ac331-c146-4eb5-bc53-42a82dd3467b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:43:04 compute-0 nova_compute[189491]: 2025-12-01 09:43:04.960 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:05 compute-0 podman[252267]: 2025-12-01 09:43:05.690414096 +0000 UTC m=+0.065657376 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec  1 09:43:05 compute-0 podman[252266]: 2025-12-01 09:43:05.713509621 +0000 UTC m=+0.091792946 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal)
Dec  1 09:43:08 compute-0 nova_compute[189491]: 2025-12-01 09:43:08.088 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:08 compute-0 nova_compute[189491]: 2025-12-01 09:43:08.750 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:08 compute-0 nova_compute[189491]: 2025-12-01 09:43:08.750 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:08 compute-0 nova_compute[189491]: 2025-12-01 09:43:08.777 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:43:08 compute-0 nova_compute[189491]: 2025-12-01 09:43:08.906 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:08 compute-0 nova_compute[189491]: 2025-12-01 09:43:08.907 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:08 compute-0 nova_compute[189491]: 2025-12-01 09:43:08.921 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:43:08 compute-0 nova_compute[189491]: 2025-12-01 09:43:08.922 189495 INFO nova.compute.claims [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.113 189495 DEBUG nova.compute.provider_tree [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.135 189495 DEBUG nova.scheduler.client.report [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.183 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.185 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.262 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.263 189495 DEBUG nova.network.neutron [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.298 189495 INFO nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.333 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.438 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.441 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.442 189495 INFO nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Creating image(s)#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.443 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.443 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.444 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.465 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.527 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.530 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.532 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.551 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.638 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.639 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.690 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.691 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.692 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:09 compute-0 podman[252306]: 2025-12-01 09:43:09.702960945 +0000 UTC m=+0.078376313 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 09:43:09 compute-0 podman[252307]: 2025-12-01 09:43:09.736537241 +0000 UTC m=+0.108748129 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.767 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.769 189495 DEBUG nova.virt.disk.api [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Checking if we can resize image /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.769 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.838 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.839 189495 DEBUG nova.virt.disk.api [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Cannot resize image /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.840 189495 DEBUG nova.objects.instance [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lazy-loading 'migration_context' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.852 189495 DEBUG nova.policy [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f215f81d0ab4d1fb34e21bf69e390fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5fc8e7c1a854418b0a110cc22e69de0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.867 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.869 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Ensure instance console log exists: /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.870 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.870 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.871 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:09 compute-0 nova_compute[189491]: 2025-12-01 09:43:09.962 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:11 compute-0 nova_compute[189491]: 2025-12-01 09:43:11.916 189495 DEBUG nova.network.neutron [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Successfully created port: 9dc75317-7a9b-4763-9189-4ea68bfc3ccb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:43:13 compute-0 nova_compute[189491]: 2025-12-01 09:43:13.091 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:14 compute-0 nova_compute[189491]: 2025-12-01 09:43:14.911 189495 DEBUG nova.network.neutron [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Successfully updated port: 9dc75317-7a9b-4763-9189-4ea68bfc3ccb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:43:14 compute-0 nova_compute[189491]: 2025-12-01 09:43:14.928 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:43:14 compute-0 nova_compute[189491]: 2025-12-01 09:43:14.929 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquired lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:43:14 compute-0 nova_compute[189491]: 2025-12-01 09:43:14.929 189495 DEBUG nova.network.neutron [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:43:14 compute-0 nova_compute[189491]: 2025-12-01 09:43:14.966 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:15 compute-0 nova_compute[189491]: 2025-12-01 09:43:15.099 189495 DEBUG nova.compute.manager [req-0f7c8da5-cef9-4afa-8c03-4035c8165a9e req-7b1e160f-6486-4427-9b9d-49a5f712b8c3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-changed-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:43:15 compute-0 nova_compute[189491]: 2025-12-01 09:43:15.099 189495 DEBUG nova.compute.manager [req-0f7c8da5-cef9-4afa-8c03-4035c8165a9e req-7b1e160f-6486-4427-9b9d-49a5f712b8c3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Refreshing instance network info cache due to event network-changed-9dc75317-7a9b-4763-9189-4ea68bfc3ccb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:43:15 compute-0 nova_compute[189491]: 2025-12-01 09:43:15.100 189495 DEBUG oslo_concurrency.lockutils [req-0f7c8da5-cef9-4afa-8c03-4035c8165a9e req-7b1e160f-6486-4427-9b9d-49a5f712b8c3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:43:15 compute-0 nova_compute[189491]: 2025-12-01 09:43:15.187 189495 DEBUG nova.network.neutron [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:43:15 compute-0 ovn_controller[97794]: 2025-12-01T09:43:15Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:0b:ad 10.100.0.9
Dec  1 09:43:15 compute-0 ovn_controller[97794]: 2025-12-01T09:43:15Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:0b:ad 10.100.0.9
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.462 189495 DEBUG nova.network.neutron [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updating instance_info_cache with network_info: [{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.485 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Releasing lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.486 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Instance network_info: |[{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.486 189495 DEBUG oslo_concurrency.lockutils [req-0f7c8da5-cef9-4afa-8c03-4035c8165a9e req-7b1e160f-6486-4427-9b9d-49a5f712b8c3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.487 189495 DEBUG nova.network.neutron [req-0f7c8da5-cef9-4afa-8c03-4035c8165a9e req-7b1e160f-6486-4427-9b9d-49a5f712b8c3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Refreshing network info cache for port 9dc75317-7a9b-4763-9189-4ea68bfc3ccb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.490 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Start _get_guest_xml network_info=[{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.498 189495 WARNING nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.505 189495 DEBUG nova.virt.libvirt.host [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.506 189495 DEBUG nova.virt.libvirt.host [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.511 189495 DEBUG nova.virt.libvirt.host [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.511 189495 DEBUG nova.virt.libvirt.host [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.512 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.512 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.513 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.514 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.514 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.514 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.514 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.515 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.515 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.516 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.516 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.516 189495 DEBUG nova.virt.hardware [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.519 189495 DEBUG nova.virt.libvirt.vif [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2131740452',display_name='tempest-ServerActionsTestJSON-server-2131740452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2131740452',id=8,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQFUVYl1Xqq2gIQN4/eCJ8cnpGKeD2gZ7u/gkHTzBRwJJoku8v2NGbkC1lQIa8TB9NaZUcsSyfv1koauiYvXUFGYORBUpCcLDSn5ClA7+eTQ5bJXZBZqJiWDZmhR8SgRA==',key_name='tempest-keypair-1047797503',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5fc8e7c1a854418b0a110cc22e69de0',ramdisk_id='',reservation_id='r-k3gqld7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-253829526',owner_user_name='tempest-ServerActionsTestJSON-253829526-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:43:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f215f81d0ab4d1fb34e21bf69e390fe',uuid=b5a25e93-8e59-4459-a45e-2d1d2d486bbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.520 189495 DEBUG nova.network.os_vif_util [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converting VIF {"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.521 189495 DEBUG nova.network.os_vif_util [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.521 189495 DEBUG nova.objects.instance [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.535 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <uuid>b5a25e93-8e59-4459-a45e-2d1d2d486bbc</uuid>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <name>instance-00000008</name>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <nova:name>tempest-ServerActionsTestJSON-server-2131740452</nova:name>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:43:16</nova:creationTime>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:43:16 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:43:16 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:43:16 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:43:16 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:43:16 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:43:16 compute-0 nova_compute[189491]:        <nova:user uuid="7f215f81d0ab4d1fb34e21bf69e390fe">tempest-ServerActionsTestJSON-253829526-project-member</nova:user>
Dec  1 09:43:16 compute-0 nova_compute[189491]:        <nova:project uuid="a5fc8e7c1a854418b0a110cc22e69de0">tempest-ServerActionsTestJSON-253829526</nova:project>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:43:16 compute-0 nova_compute[189491]:        <nova:port uuid="9dc75317-7a9b-4763-9189-4ea68bfc3ccb">
Dec  1 09:43:16 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <system>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <entry name="serial">b5a25e93-8e59-4459-a45e-2d1d2d486bbc</entry>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <entry name="uuid">b5a25e93-8e59-4459-a45e-2d1d2d486bbc</entry>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </system>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <os>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  </os>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <features>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  </features>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.config"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:81:32:12"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <target dev="tap9dc75317-7a"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/console.log" append="off"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <video>
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </video>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:43:16 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:43:16 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:43:16 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:43:16 compute-0 nova_compute[189491]: </domain>
Dec  1 09:43:16 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.536 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Preparing to wait for external event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.536 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.536 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.536 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.537 189495 DEBUG nova.virt.libvirt.vif [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2131740452',display_name='tempest-ServerActionsTestJSON-server-2131740452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2131740452',id=8,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQFUVYl1Xqq2gIQN4/eCJ8cnpGKeD2gZ7u/gkHTzBRwJJoku8v2NGbkC1lQIa8TB9NaZUcsSyfv1koauiYvXUFGYORBUpCcLDSn5ClA7+eTQ5bJXZBZqJiWDZmhR8SgRA==',key_name='tempest-keypair-1047797503',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5fc8e7c1a854418b0a110cc22e69de0',ramdisk_id='',reservation_id='r-k3gqld7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-253829526',owner_user_name='tempest-ServerActionsTestJSON-253829526-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:43:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f215f81d0ab4d1fb34e21bf69e390fe',uuid=b5a25e93-8e59-4459-a45e-2d1d2d486bbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.537 189495 DEBUG nova.network.os_vif_util [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converting VIF {"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.537 189495 DEBUG nova.network.os_vif_util [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.538 189495 DEBUG os_vif [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.538 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.538 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.539 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.541 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.541 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9dc75317-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.542 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9dc75317-7a, col_values=(('external_ids', {'iface-id': '9dc75317-7a9b-4763-9189-4ea68bfc3ccb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:32:12', 'vm-uuid': 'b5a25e93-8e59-4459-a45e-2d1d2d486bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.543 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:16 compute-0 NetworkManager[56318]: <info>  [1764582196.5447] manager: (tap9dc75317-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.546 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.552 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.552 189495 INFO os_vif [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a')#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.607 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.607 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.607 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] No VIF found with MAC fa:16:3e:81:32:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:43:16 compute-0 nova_compute[189491]: 2025-12-01 09:43:16.608 189495 INFO nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Using config drive#033[00m
Dec  1 09:43:17 compute-0 nova_compute[189491]: 2025-12-01 09:43:17.270 189495 INFO nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Creating config drive at /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.config#033[00m
Dec  1 09:43:17 compute-0 nova_compute[189491]: 2025-12-01 09:43:17.277 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpidckogjv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:17 compute-0 nova_compute[189491]: 2025-12-01 09:43:17.405 189495 DEBUG oslo_concurrency.processutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpidckogjv" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:17 compute-0 kernel: tap9dc75317-7a: entered promiscuous mode
Dec  1 09:43:17 compute-0 NetworkManager[56318]: <info>  [1764582197.4637] manager: (tap9dc75317-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Dec  1 09:43:17 compute-0 nova_compute[189491]: 2025-12-01 09:43:17.465 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:17 compute-0 ovn_controller[97794]: 2025-12-01T09:43:17Z|00082|binding|INFO|Claiming lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb for this chassis.
Dec  1 09:43:17 compute-0 ovn_controller[97794]: 2025-12-01T09:43:17Z|00083|binding|INFO|9dc75317-7a9b-4763-9189-4ea68bfc3ccb: Claiming fa:16:3e:81:32:12 10.100.0.14
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.472 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:32:12 10.100.0.14'], port_security=['fa:16:3e:81:32:12 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b5a25e93-8e59-4459-a45e-2d1d2d486bbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5fc8e7c1a854418b0a110cc22e69de0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72afbc16-616c-4679-8b1b-dcb1251c5132', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3074f1d2-6f44-4fa9-90f3-bc6399575f2a, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=9dc75317-7a9b-4763-9189-4ea68bfc3ccb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.473 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 9dc75317-7a9b-4763-9189-4ea68bfc3ccb in datapath 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea bound to our chassis#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.475 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea#033[00m
Dec  1 09:43:17 compute-0 ovn_controller[97794]: 2025-12-01T09:43:17Z|00084|binding|INFO|Setting lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb ovn-installed in OVS
Dec  1 09:43:17 compute-0 ovn_controller[97794]: 2025-12-01T09:43:17Z|00085|binding|INFO|Setting lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb up in Southbound
Dec  1 09:43:17 compute-0 nova_compute[189491]: 2025-12-01 09:43:17.483 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.486 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[dec0f4d7-b1bc-43e6-9429-8a4361e9855b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.487 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap528d6fcc-41 in ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.489 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap528d6fcc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.489 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[446762aa-f0fc-4c70-8cc9-3ce877aa1747]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.491 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ddb0bf-0685-4d81-b941-772facb51a86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.505 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff009e1-a103-4c04-936f-9f676ab7fb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 systemd-machined[155812]: New machine qemu-8-instance-00000008.
Dec  1 09:43:17 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.532 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[537c981f-c0b6-4299-8e38-aa62998e8d9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 systemd-udevd[252394]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:43:17 compute-0 NetworkManager[56318]: <info>  [1764582197.5590] device (tap9dc75317-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:43:17 compute-0 NetworkManager[56318]: <info>  [1764582197.5610] device (tap9dc75317-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.570 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[08f3548f-a849-468b-bfb2-63b85969ff69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 NetworkManager[56318]: <info>  [1764582197.5791] manager: (tap528d6fcc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.578 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a1af5769-c39c-4aad-8c61-3b983802d971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.614 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[df19ef8a-3ec1-49ec-807c-2db426f330ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.618 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8d4c32-9175-4aed-b456-d27a4808888f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 NetworkManager[56318]: <info>  [1764582197.6437] device (tap528d6fcc-40): carrier: link connected
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.652 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[09791d96-4ccc-4ef4-ba42-225f1be041f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.673 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[82e8b070-d4ed-4ad8-a7a9-090f571b4be8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap528d6fcc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:98:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543733, 'reachable_time': 36980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252424, 'error': None, 'target': 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.689 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[384a7e22-73c9-4c21-b277-d1d0144c46cc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:98ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543733, 'tstamp': 543733}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252425, 'error': None, 'target': 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.706 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b216b132-7e1e-40a4-b440-c7539af64c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap528d6fcc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:98:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543733, 'reachable_time': 36980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252426, 'error': None, 'target': 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.736 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d18ef25b-2896-45f1-b4e2-81875197b260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.791 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[3b99e240-b319-488f-afff-648f1d1b0f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.793 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap528d6fcc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.793 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.794 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap528d6fcc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:43:17 compute-0 kernel: tap528d6fcc-40: entered promiscuous mode
Dec  1 09:43:17 compute-0 NetworkManager[56318]: <info>  [1764582197.7978] manager: (tap528d6fcc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.802 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap528d6fcc-40, col_values=(('external_ids', {'iface-id': '8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:43:17 compute-0 ovn_controller[97794]: 2025-12-01T09:43:17Z|00086|binding|INFO|Releasing lport 8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90 from this chassis (sb_readonly=0)
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.808 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/528d6fcc-4f6c-4000-b20b-6a6d9f6135ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/528d6fcc-4f6c-4000-b20b-6a6d9f6135ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.809 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[45ce013e-edb1-4739-8385-0f308bd294ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.810 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/528d6fcc-4f6c-4000-b20b-6a6d9f6135ea.pid.haproxy
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  1 09:43:17 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:17.810 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'env', 'PROCESS_TAG=haproxy-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/528d6fcc-4f6c-4000-b20b-6a6d9f6135ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  1 09:43:17 compute-0 nova_compute[189491]: 2025-12-01 09:43:17.818 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:43:17 compute-0 nova_compute[189491]: 2025-12-01 09:43:17.826 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.093 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:43:18 compute-0 podman[252457]: 2025-12-01 09:43:18.224887511 +0000 UTC m=+0.071041210 container create 85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 09:43:18 compute-0 systemd[1]: Started libpod-conmon-85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c.scope.
Dec  1 09:43:18 compute-0 podman[252457]: 2025-12-01 09:43:18.195339136 +0000 UTC m=+0.041492855 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:43:18 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c3c361249835f8b3ea812ed9f69e217886cce34dfcd9b15355b850a38ad995/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:43:18 compute-0 podman[252457]: 2025-12-01 09:43:18.331530157 +0000 UTC m=+0.177683886 container init 85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  1 09:43:18 compute-0 podman[252457]: 2025-12-01 09:43:18.338763858 +0000 UTC m=+0.184917567 container start 85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 09:43:18 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[252472]: [NOTICE]   (252476) : New worker (252480) forked
Dec  1 09:43:18 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[252472]: [NOTICE]   (252476) : Loading success.
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.452 189495 DEBUG nova.compute.manager [req-8ef5a055-f4a3-4bc6-9dfe-729e2b9c7c1c req-3df30bbb-32b1-449f-a61f-e23da0909233 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.452 189495 DEBUG oslo_concurrency.lockutils [req-8ef5a055-f4a3-4bc6-9dfe-729e2b9c7c1c req-3df30bbb-32b1-449f-a61f-e23da0909233 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.453 189495 DEBUG oslo_concurrency.lockutils [req-8ef5a055-f4a3-4bc6-9dfe-729e2b9c7c1c req-3df30bbb-32b1-449f-a61f-e23da0909233 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.453 189495 DEBUG oslo_concurrency.lockutils [req-8ef5a055-f4a3-4bc6-9dfe-729e2b9c7c1c req-3df30bbb-32b1-449f-a61f-e23da0909233 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.453 189495 DEBUG nova.compute.manager [req-8ef5a055-f4a3-4bc6-9dfe-729e2b9c7c1c req-3df30bbb-32b1-449f-a61f-e23da0909233 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Processing event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.493 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.493 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582198.492609, b5a25e93-8e59-4459-a45e-2d1d2d486bbc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.494 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] VM Started (Lifecycle Event)
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.504 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.509 189495 INFO nova.virt.libvirt.driver [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Instance spawned successfully.
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.509 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.511 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.515 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.530 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.530 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.530 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.531 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.531 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.531 189495 DEBUG nova.virt.libvirt.driver [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.534 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.535 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582198.4927144, b5a25e93-8e59-4459-a45e-2d1d2d486bbc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.535 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] VM Paused (Lifecycle Event)
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.570 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.575 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582198.4982855, b5a25e93-8e59-4459-a45e-2d1d2d486bbc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.575 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] VM Resumed (Lifecycle Event)
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.595 189495 INFO nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Took 9.16 seconds to spawn the instance on the hypervisor.
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.595 189495 DEBUG nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.597 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.607 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.762 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.858 189495 DEBUG nova.network.neutron [req-0f7c8da5-cef9-4afa-8c03-4035c8165a9e req-7b1e160f-6486-4427-9b9d-49a5f712b8c3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updated VIF entry in instance network info cache for port 9dc75317-7a9b-4763-9189-4ea68bfc3ccb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 09:43:18 compute-0 nova_compute[189491]: 2025-12-01 09:43:18.858 189495 DEBUG nova.network.neutron [req-0f7c8da5-cef9-4afa-8c03-4035c8165a9e req-7b1e160f-6486-4427-9b9d-49a5f712b8c3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updating instance_info_cache with network_info: [{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:43:19 compute-0 nova_compute[189491]: 2025-12-01 09:43:19.050 189495 DEBUG oslo_concurrency.lockutils [req-0f7c8da5-cef9-4afa-8c03-4035c8165a9e req-7b1e160f-6486-4427-9b9d-49a5f712b8c3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:43:19 compute-0 nova_compute[189491]: 2025-12-01 09:43:19.052 189495 INFO nova.compute.manager [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Took 10.19 seconds to build instance.#033[00m
Dec  1 09:43:19 compute-0 nova_compute[189491]: 2025-12-01 09:43:19.071 189495 DEBUG oslo_concurrency.lockutils [None req-5c359e18-d0d0-4fac-a447-53be09696ac4 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:19 compute-0 nova_compute[189491]: 2025-12-01 09:43:19.618 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:20 compute-0 nova_compute[189491]: 2025-12-01 09:43:20.568 189495 DEBUG nova.compute.manager [req-388c8aff-1da8-4ab7-9f63-719479fda9c4 req-a10c785a-c359-48dc-80be-8c15d9fd2941 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:43:20 compute-0 nova_compute[189491]: 2025-12-01 09:43:20.569 189495 DEBUG oslo_concurrency.lockutils [req-388c8aff-1da8-4ab7-9f63-719479fda9c4 req-a10c785a-c359-48dc-80be-8c15d9fd2941 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:20 compute-0 nova_compute[189491]: 2025-12-01 09:43:20.569 189495 DEBUG oslo_concurrency.lockutils [req-388c8aff-1da8-4ab7-9f63-719479fda9c4 req-a10c785a-c359-48dc-80be-8c15d9fd2941 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:20 compute-0 nova_compute[189491]: 2025-12-01 09:43:20.569 189495 DEBUG oslo_concurrency.lockutils [req-388c8aff-1da8-4ab7-9f63-719479fda9c4 req-a10c785a-c359-48dc-80be-8c15d9fd2941 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:20 compute-0 nova_compute[189491]: 2025-12-01 09:43:20.569 189495 DEBUG nova.compute.manager [req-388c8aff-1da8-4ab7-9f63-719479fda9c4 req-a10c785a-c359-48dc-80be-8c15d9fd2941 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] No waiting events found dispatching network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:43:20 compute-0 nova_compute[189491]: 2025-12-01 09:43:20.569 189495 WARNING nova.compute.manager [req-388c8aff-1da8-4ab7-9f63-719479fda9c4 req-a10c785a-c359-48dc-80be-8c15d9fd2941 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received unexpected event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb for instance with vm_state active and task_state None.#033[00m
Dec  1 09:43:20 compute-0 podman[252496]: 2025-12-01 09:43:20.727534896 +0000 UTC m=+0.085403548 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 09:43:20 compute-0 podman[252495]: 2025-12-01 09:43:20.738449467 +0000 UTC m=+0.089399056 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:43:21 compute-0 nova_compute[189491]: 2025-12-01 09:43:21.546 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:22 compute-0 nova_compute[189491]: 2025-12-01 09:43:22.772 189495 DEBUG nova.compute.manager [req-1825bb3a-4e80-49c6-9925-1aab59ff39c7 req-ba649e9b-7f5d-42cc-af09-fd628090d19d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-changed-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:43:22 compute-0 nova_compute[189491]: 2025-12-01 09:43:22.772 189495 DEBUG nova.compute.manager [req-1825bb3a-4e80-49c6-9925-1aab59ff39c7 req-ba649e9b-7f5d-42cc-af09-fd628090d19d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Refreshing instance network info cache due to event network-changed-9dc75317-7a9b-4763-9189-4ea68bfc3ccb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:43:22 compute-0 nova_compute[189491]: 2025-12-01 09:43:22.772 189495 DEBUG oslo_concurrency.lockutils [req-1825bb3a-4e80-49c6-9925-1aab59ff39c7 req-ba649e9b-7f5d-42cc-af09-fd628090d19d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:43:22 compute-0 nova_compute[189491]: 2025-12-01 09:43:22.772 189495 DEBUG oslo_concurrency.lockutils [req-1825bb3a-4e80-49c6-9925-1aab59ff39c7 req-ba649e9b-7f5d-42cc-af09-fd628090d19d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:43:22 compute-0 nova_compute[189491]: 2025-12-01 09:43:22.773 189495 DEBUG nova.network.neutron [req-1825bb3a-4e80-49c6-9925-1aab59ff39c7 req-ba649e9b-7f5d-42cc-af09-fd628090d19d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Refreshing network info cache for port 9dc75317-7a9b-4763-9189-4ea68bfc3ccb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:43:23 compute-0 nova_compute[189491]: 2025-12-01 09:43:23.096 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:24 compute-0 nova_compute[189491]: 2025-12-01 09:43:24.750 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:25 compute-0 nova_compute[189491]: 2025-12-01 09:43:25.451 189495 DEBUG nova.network.neutron [req-1825bb3a-4e80-49c6-9925-1aab59ff39c7 req-ba649e9b-7f5d-42cc-af09-fd628090d19d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updated VIF entry in instance network info cache for port 9dc75317-7a9b-4763-9189-4ea68bfc3ccb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:43:25 compute-0 nova_compute[189491]: 2025-12-01 09:43:25.452 189495 DEBUG nova.network.neutron [req-1825bb3a-4e80-49c6-9925-1aab59ff39c7 req-ba649e9b-7f5d-42cc-af09-fd628090d19d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updating instance_info_cache with network_info: [{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:43:25 compute-0 nova_compute[189491]: 2025-12-01 09:43:25.474 189495 DEBUG oslo_concurrency.lockutils [req-1825bb3a-4e80-49c6-9925-1aab59ff39c7 req-ba649e9b-7f5d-42cc-af09-fd628090d19d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:43:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:26.534 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:26.535 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:26.536 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:26 compute-0 nova_compute[189491]: 2025-12-01 09:43:26.550 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:27 compute-0 nova_compute[189491]: 2025-12-01 09:43:27.412 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:27 compute-0 podman[252534]: 2025-12-01 09:43:27.708246278 +0000 UTC m=+0.081519521 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 09:43:28 compute-0 nova_compute[189491]: 2025-12-01 09:43:28.100 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:29 compute-0 podman[252553]: 2025-12-01 09:43:29.693293444 +0000 UTC m=+0.063705767 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:43:29 compute-0 podman[252554]: 2025-12-01 09:43:29.713265661 +0000 UTC m=+0.081076440 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, architecture=x86_64, release=1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, config_id=edpm, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:43:29 compute-0 podman[203700]: time="2025-12-01T09:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:43:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30758 "" "Go-http-client/1.1"
Dec  1 09:43:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5269 "" "Go-http-client/1.1"
Dec  1 09:43:31 compute-0 openstack_network_exporter[205866]: ERROR   09:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:43:31 compute-0 openstack_network_exporter[205866]: ERROR   09:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:43:31 compute-0 openstack_network_exporter[205866]: ERROR   09:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:43:31 compute-0 openstack_network_exporter[205866]: ERROR   09:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:43:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:43:31 compute-0 openstack_network_exporter[205866]: ERROR   09:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:43:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:43:31 compute-0 nova_compute[189491]: 2025-12-01 09:43:31.552 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:31.623 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:43:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:31.624 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:43:31 compute-0 nova_compute[189491]: 2025-12-01 09:43:31.626 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:33 compute-0 nova_compute[189491]: 2025-12-01 09:43:33.103 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:36 compute-0 nova_compute[189491]: 2025-12-01 09:43:36.554 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:36 compute-0 podman[252597]: 2025-12-01 09:43:36.703580643 +0000 UTC m=+0.078829423 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  1 09:43:36 compute-0 podman[252598]: 2025-12-01 09:43:36.707214544 +0000 UTC m=+0.067708938 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:43:37 compute-0 nova_compute[189491]: 2025-12-01 09:43:37.251 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:37 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:43:37.627 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:43:37 compute-0 nova_compute[189491]: 2025-12-01 09:43:37.731 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:37 compute-0 nova_compute[189491]: 2025-12-01 09:43:37.731 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:43:37 compute-0 nova_compute[189491]: 2025-12-01 09:43:37.731 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:43:38 compute-0 nova_compute[189491]: 2025-12-01 09:43:38.106 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:38 compute-0 nova_compute[189491]: 2025-12-01 09:43:38.524 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:43:38 compute-0 nova_compute[189491]: 2025-12-01 09:43:38.525 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:43:38 compute-0 nova_compute[189491]: 2025-12-01 09:43:38.525 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:43:38 compute-0 nova_compute[189491]: 2025-12-01 09:43:38.525 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 38643437-7822-4834-8301-02d3402cad15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:43:39 compute-0 ovn_controller[97794]: 2025-12-01T09:43:39Z|00087|binding|INFO|Releasing lport 8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90 from this chassis (sb_readonly=0)
Dec  1 09:43:39 compute-0 ovn_controller[97794]: 2025-12-01T09:43:39Z|00088|binding|INFO|Releasing lport 043e8190-2d11-42d5-822a-8b7d16589eb2 from this chassis (sb_readonly=0)
Dec  1 09:43:39 compute-0 nova_compute[189491]: 2025-12-01 09:43:39.290 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:40 compute-0 podman[252636]: 2025-12-01 09:43:40.714115711 +0000 UTC m=+0.090012832 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:43:40 compute-0 podman[252637]: 2025-12-01 09:43:40.743450011 +0000 UTC m=+0.111104927 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:43:41 compute-0 nova_compute[189491]: 2025-12-01 09:43:41.213 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:43:41 compute-0 nova_compute[189491]: 2025-12-01 09:43:41.230 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:43:41 compute-0 nova_compute[189491]: 2025-12-01 09:43:41.230 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:43:41 compute-0 nova_compute[189491]: 2025-12-01 09:43:41.556 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:42 compute-0 nova_compute[189491]: 2025-12-01 09:43:42.760 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:43 compute-0 nova_compute[189491]: 2025-12-01 09:43:43.110 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:43 compute-0 nova_compute[189491]: 2025-12-01 09:43:43.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.737 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.737 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.738 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.738 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.819 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.881 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.882 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.942 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:44 compute-0 nova_compute[189491]: 2025-12-01 09:43:44.953 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.034 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.036 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.100 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.455 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.456 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5019MB free_disk=72.31116104125977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.457 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.457 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.666 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 38643437-7822-4834-8301-02d3402cad15 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.667 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance b5a25e93-8e59-4459-a45e-2d1d2d486bbc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.668 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.668 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.855 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.878 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.901 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:43:45 compute-0 nova_compute[189491]: 2025-12-01 09:43:45.902 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:43:46 compute-0 nova_compute[189491]: 2025-12-01 09:43:46.559 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:48 compute-0 nova_compute[189491]: 2025-12-01 09:43:48.111 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:48 compute-0 nova_compute[189491]: 2025-12-01 09:43:48.903 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:48 compute-0 nova_compute[189491]: 2025-12-01 09:43:48.904 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:48 compute-0 nova_compute[189491]: 2025-12-01 09:43:48.904 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:48 compute-0 nova_compute[189491]: 2025-12-01 09:43:48.905 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:43:49 compute-0 nova_compute[189491]: 2025-12-01 09:43:49.710 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:50 compute-0 nova_compute[189491]: 2025-12-01 09:43:50.076 189495 DEBUG nova.objects.instance [None req-86a660f5-6ff3-419c-871b-36f79b25b3da 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lazy-loading 'flavor' on Instance uuid 38643437-7822-4834-8301-02d3402cad15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:43:50 compute-0 nova_compute[189491]: 2025-12-01 09:43:50.134 189495 DEBUG oslo_concurrency.lockutils [None req-86a660f5-6ff3-419c-871b-36f79b25b3da 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:43:50 compute-0 nova_compute[189491]: 2025-12-01 09:43:50.135 189495 DEBUG oslo_concurrency.lockutils [None req-86a660f5-6ff3-419c-871b-36f79b25b3da 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquired lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:43:50 compute-0 nova_compute[189491]: 2025-12-01 09:43:50.284 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:50 compute-0 nova_compute[189491]: 2025-12-01 09:43:50.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:50 compute-0 nova_compute[189491]: 2025-12-01 09:43:50.875 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:51 compute-0 nova_compute[189491]: 2025-12-01 09:43:51.564 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:51 compute-0 podman[252694]: 2025-12-01 09:43:51.714351386 +0000 UTC m=+0.079479510 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:43:51 compute-0 podman[252695]: 2025-12-01 09:43:51.718140981 +0000 UTC m=+0.082346271 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:43:52 compute-0 nova_compute[189491]: 2025-12-01 09:43:52.175 189495 DEBUG nova.network.neutron [None req-86a660f5-6ff3-419c-871b-36f79b25b3da 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:43:52 compute-0 nova_compute[189491]: 2025-12-01 09:43:52.324 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:52 compute-0 nova_compute[189491]: 2025-12-01 09:43:52.350 189495 DEBUG nova.compute.manager [req-0c9c240f-99e0-4225-985b-7bb81852db20 req-8bed7332-fb59-4f55-a82e-cf09a8d28e14 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Received event network-changed-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:43:52 compute-0 nova_compute[189491]: 2025-12-01 09:43:52.351 189495 DEBUG nova.compute.manager [req-0c9c240f-99e0-4225-985b-7bb81852db20 req-8bed7332-fb59-4f55-a82e-cf09a8d28e14 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Refreshing instance network info cache due to event network-changed-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:43:52 compute-0 nova_compute[189491]: 2025-12-01 09:43:52.351 189495 DEBUG oslo_concurrency.lockutils [req-0c9c240f-99e0-4225-985b-7bb81852db20 req-8bed7332-fb59-4f55-a82e-cf09a8d28e14 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:43:52 compute-0 nova_compute[189491]: 2025-12-01 09:43:52.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:53 compute-0 nova_compute[189491]: 2025-12-01 09:43:53.114 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:54 compute-0 nova_compute[189491]: 2025-12-01 09:43:54.587 189495 DEBUG nova.network.neutron [None req-86a660f5-6ff3-419c-871b-36f79b25b3da 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:43:54 compute-0 nova_compute[189491]: 2025-12-01 09:43:54.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:43:54 compute-0 nova_compute[189491]: 2025-12-01 09:43:54.823 189495 DEBUG oslo_concurrency.lockutils [None req-86a660f5-6ff3-419c-871b-36f79b25b3da 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Releasing lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:43:54 compute-0 nova_compute[189491]: 2025-12-01 09:43:54.824 189495 DEBUG nova.compute.manager [None req-86a660f5-6ff3-419c-871b-36f79b25b3da 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  1 09:43:54 compute-0 nova_compute[189491]: 2025-12-01 09:43:54.824 189495 DEBUG nova.compute.manager [None req-86a660f5-6ff3-419c-871b-36f79b25b3da 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] network_info to inject: |[{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  1 09:43:54 compute-0 nova_compute[189491]: 2025-12-01 09:43:54.827 189495 DEBUG oslo_concurrency.lockutils [req-0c9c240f-99e0-4225-985b-7bb81852db20 req-8bed7332-fb59-4f55-a82e-cf09a8d28e14 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:43:54 compute-0 nova_compute[189491]: 2025-12-01 09:43:54.828 189495 DEBUG nova.network.neutron [req-0c9c240f-99e0-4225-985b-7bb81852db20 req-8bed7332-fb59-4f55-a82e-cf09a8d28e14 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Refreshing network info cache for port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:43:55 compute-0 ovn_controller[97794]: 2025-12-01T09:43:55Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:32:12 10.100.0.14
Dec  1 09:43:55 compute-0 ovn_controller[97794]: 2025-12-01T09:43:55Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:32:12 10.100.0.14
Dec  1 09:43:55 compute-0 nova_compute[189491]: 2025-12-01 09:43:55.969 189495 DEBUG nova.objects.instance [None req-1b195172-64af-42c2-9cd5-772a2e2f5eee 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lazy-loading 'flavor' on Instance uuid 38643437-7822-4834-8301-02d3402cad15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:43:56 compute-0 nova_compute[189491]: 2025-12-01 09:43:56.013 189495 DEBUG oslo_concurrency.lockutils [None req-1b195172-64af-42c2-9cd5-772a2e2f5eee 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:43:56 compute-0 nova_compute[189491]: 2025-12-01 09:43:56.376 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:56 compute-0 nova_compute[189491]: 2025-12-01 09:43:56.566 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:56 compute-0 nova_compute[189491]: 2025-12-01 09:43:56.946 189495 DEBUG nova.network.neutron [req-0c9c240f-99e0-4225-985b-7bb81852db20 req-8bed7332-fb59-4f55-a82e-cf09a8d28e14 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updated VIF entry in instance network info cache for port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:43:56 compute-0 nova_compute[189491]: 2025-12-01 09:43:56.947 189495 DEBUG nova.network.neutron [req-0c9c240f-99e0-4225-985b-7bb81852db20 req-8bed7332-fb59-4f55-a82e-cf09a8d28e14 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:43:56 compute-0 nova_compute[189491]: 2025-12-01 09:43:56.963 189495 DEBUG oslo_concurrency.lockutils [req-0c9c240f-99e0-4225-985b-7bb81852db20 req-8bed7332-fb59-4f55-a82e-cf09a8d28e14 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:43:56 compute-0 nova_compute[189491]: 2025-12-01 09:43:56.964 189495 DEBUG oslo_concurrency.lockutils [None req-1b195172-64af-42c2-9cd5-772a2e2f5eee 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquired lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:43:58 compute-0 nova_compute[189491]: 2025-12-01 09:43:58.117 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:58 compute-0 nova_compute[189491]: 2025-12-01 09:43:58.133 189495 DEBUG nova.network.neutron [None req-1b195172-64af-42c2-9cd5-772a2e2f5eee 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:43:58 compute-0 nova_compute[189491]: 2025-12-01 09:43:58.268 189495 DEBUG nova.compute.manager [req-af24ecd0-ba93-4060-8afe-005e9726bfc0 req-652c45c3-6648-4887-b135-06e61a4ff54c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Received event network-changed-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:43:58 compute-0 nova_compute[189491]: 2025-12-01 09:43:58.269 189495 DEBUG nova.compute.manager [req-af24ecd0-ba93-4060-8afe-005e9726bfc0 req-652c45c3-6648-4887-b135-06e61a4ff54c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Refreshing instance network info cache due to event network-changed-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:43:58 compute-0 nova_compute[189491]: 2025-12-01 09:43:58.269 189495 DEBUG oslo_concurrency.lockutils [req-af24ecd0-ba93-4060-8afe-005e9726bfc0 req-652c45c3-6648-4887-b135-06e61a4ff54c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:43:58 compute-0 podman[252748]: 2025-12-01 09:43:58.732141182 +0000 UTC m=+0.100599966 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:43:59 compute-0 nova_compute[189491]: 2025-12-01 09:43:59.106 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:43:59 compute-0 nova_compute[189491]: 2025-12-01 09:43:59.543 189495 DEBUG nova.network.neutron [None req-1b195172-64af-42c2-9cd5-772a2e2f5eee 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:43:59 compute-0 nova_compute[189491]: 2025-12-01 09:43:59.565 189495 DEBUG oslo_concurrency.lockutils [None req-1b195172-64af-42c2-9cd5-772a2e2f5eee 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Releasing lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:43:59 compute-0 nova_compute[189491]: 2025-12-01 09:43:59.566 189495 DEBUG nova.compute.manager [None req-1b195172-64af-42c2-9cd5-772a2e2f5eee 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  1 09:43:59 compute-0 nova_compute[189491]: 2025-12-01 09:43:59.566 189495 DEBUG nova.compute.manager [None req-1b195172-64af-42c2-9cd5-772a2e2f5eee 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] network_info to inject: |[{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  1 09:43:59 compute-0 nova_compute[189491]: 2025-12-01 09:43:59.568 189495 DEBUG oslo_concurrency.lockutils [req-af24ecd0-ba93-4060-8afe-005e9726bfc0 req-652c45c3-6648-4887-b135-06e61a4ff54c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:43:59 compute-0 nova_compute[189491]: 2025-12-01 09:43:59.568 189495 DEBUG nova.network.neutron [req-af24ecd0-ba93-4060-8afe-005e9726bfc0 req-652c45c3-6648-4887-b135-06e61a4ff54c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Refreshing network info cache for port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:43:59 compute-0 podman[203700]: time="2025-12-01T09:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:43:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30758 "" "Go-http-client/1.1"
Dec  1 09:43:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5270 "" "Go-http-client/1.1"
Dec  1 09:44:00 compute-0 podman[252769]: 2025-12-01 09:44:00.742373835 +0000 UTC m=+0.094462834 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:44:00 compute-0 podman[252770]: 2025-12-01 09:44:00.760718522 +0000 UTC m=+0.115677382 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, config_id=edpm, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30)
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.816 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "38643437-7822-4834-8301-02d3402cad15" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.817 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "38643437-7822-4834-8301-02d3402cad15" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.817 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "38643437-7822-4834-8301-02d3402cad15-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.818 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "38643437-7822-4834-8301-02d3402cad15-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.818 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "38643437-7822-4834-8301-02d3402cad15-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.820 189495 INFO nova.compute.manager [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Terminating instance#033[00m
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.821 189495 DEBUG nova.compute.manager [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:44:00 compute-0 kernel: tap7d0f49f6-e0 (unregistering): left promiscuous mode
Dec  1 09:44:00 compute-0 NetworkManager[56318]: <info>  [1764582240.8548] device (tap7d0f49f6-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:44:00 compute-0 ovn_controller[97794]: 2025-12-01T09:44:00Z|00089|binding|INFO|Releasing lport 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb from this chassis (sb_readonly=0)
Dec  1 09:44:00 compute-0 ovn_controller[97794]: 2025-12-01T09:44:00Z|00090|binding|INFO|Setting lport 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb down in Southbound
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.869 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:00 compute-0 ovn_controller[97794]: 2025-12-01T09:44:00Z|00091|binding|INFO|Removing iface tap7d0f49f6-e0 ovn-installed in OVS
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.873 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:00 compute-0 nova_compute[189491]: 2025-12-01 09:44:00.883 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:00 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:00.905 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:0b:ad 10.100.0.9'], port_security=['fa:16:3e:ac:0b:ad 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '38643437-7822-4834-8301-02d3402cad15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7764856ebb94acbaa0b40cbbf09cb3d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '956cfa36-e252-4c20-b19a-437aef36f7e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63038633-add0-4830-ba46-d2e62ec7d35b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:44:00 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:00.906 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb in datapath 8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0 unbound from our chassis#033[00m
Dec  1 09:44:00 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:00.907 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:44:00 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  1 09:44:00 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:00.909 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[da8ebcbf-2ff1-4a7f-98d3-1c586f86c254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:00 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 42.736s CPU time.
Dec  1 09:44:00 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:00.910 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0 namespace which is not needed anymore#033[00m
Dec  1 09:44:00 compute-0 systemd-machined[155812]: Machine qemu-6-instance-00000006 terminated.
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.056 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.062 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0[251984]: [NOTICE]   (251988) : haproxy version is 2.8.14-c23fe91
Dec  1 09:44:01 compute-0 neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0[251984]: [NOTICE]   (251988) : path to executable is /usr/sbin/haproxy
Dec  1 09:44:01 compute-0 neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0[251984]: [WARNING]  (251988) : Exiting Master process...
Dec  1 09:44:01 compute-0 neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0[251984]: [WARNING]  (251988) : Exiting Master process...
Dec  1 09:44:01 compute-0 neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0[251984]: [ALERT]    (251988) : Current worker (251990) exited with code 143 (Terminated)
Dec  1 09:44:01 compute-0 neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0[251984]: [WARNING]  (251988) : All workers exited. Exiting... (0)
Dec  1 09:44:01 compute-0 systemd[1]: libpod-7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48.scope: Deactivated successfully.
Dec  1 09:44:01 compute-0 podman[252834]: 2025-12-01 09:44:01.100777489 +0000 UTC m=+0.073006099 container died 7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.101 189495 INFO nova.virt.libvirt.driver [-] [instance: 38643437-7822-4834-8301-02d3402cad15] Instance destroyed successfully.#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.102 189495 DEBUG nova.objects.instance [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lazy-loading 'resources' on Instance uuid 38643437-7822-4834-8301-02d3402cad15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.106 189495 DEBUG nova.network.neutron [req-af24ecd0-ba93-4060-8afe-005e9726bfc0 req-652c45c3-6648-4887-b135-06e61a4ff54c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updated VIF entry in instance network info cache for port 7d0f49f6-e0e1-44b1-be36-fa4df3220ddb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.106 189495 DEBUG nova.network.neutron [req-af24ecd0-ba93-4060-8afe-005e9726bfc0 req-652c45c3-6648-4887-b135-06e61a4ff54c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [{"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.192 189495 DEBUG nova.virt.libvirt.vif [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1816623560',display_name='tempest-AttachInterfacesUnderV243Test-server-1816623560',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1816623560',id=6,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDov5y9OwhjjL8WPI8jxvuxbUQv67pnksuH/4lF8J8r1S9hI5ZeobpiFpyHKcxVVEV1lVkVZ97drOsKr7ctk5ApG1BaxbqF45NStb7lJLgZLvMHh2SYNMaXiiNfpkaIOQ==',key_name='tempest-keypair-1935929616',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:42:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d7764856ebb94acbaa0b40cbbf09cb3d',ramdisk_id='',reservation_id='r-uw59cg9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-820336300',owner_user_name='tempest-AttachInterfacesUnderV243Test-820336300-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:43:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='688e0c65604244fb9d423018bc88d238',uuid=38643437-7822-4834-8301-02d3402cad15,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.194 189495 DEBUG nova.network.os_vif_util [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Converting VIF {"id": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "address": "fa:16:3e:ac:0b:ad", "network": {"id": "8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-168730074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7764856ebb94acbaa0b40cbbf09cb3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d0f49f6-e0", "ovs_interfaceid": "7d0f49f6-e0e1-44b1-be36-fa4df3220ddb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.195 189495 DEBUG nova.network.os_vif_util [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:0b:ad,bridge_name='br-int',has_traffic_filtering=True,id=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb,network=Network(8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d0f49f6-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.195 189495 DEBUG os_vif [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:0b:ad,bridge_name='br-int',has_traffic_filtering=True,id=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb,network=Network(8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d0f49f6-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.199 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.199 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d0f49f6-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.202 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 ovn_controller[97794]: 2025-12-01T09:44:01Z|00092|binding|INFO|Releasing lport 8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90 from this chassis (sb_readonly=0)
Dec  1 09:44:01 compute-0 ovn_controller[97794]: 2025-12-01T09:44:01Z|00093|binding|INFO|Releasing lport 043e8190-2d11-42d5-822a-8b7d16589eb2 from this chassis (sb_readonly=0)
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.207 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.208 189495 DEBUG oslo_concurrency.lockutils [req-af24ecd0-ba93-4060-8afe-005e9726bfc0 req-652c45c3-6648-4887-b135-06e61a4ff54c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-38643437-7822-4834-8301-02d3402cad15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.213 189495 INFO os_vif [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:0b:ad,bridge_name='br-int',has_traffic_filtering=True,id=7d0f49f6-e0e1-44b1-be36-fa4df3220ddb,network=Network(8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d0f49f6-e0')#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.214 189495 INFO nova.virt.libvirt.driver [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Deleting instance files /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15_del#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.216 189495 INFO nova.virt.libvirt.driver [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Deletion of /var/lib/nova/instances/38643437-7822-4834-8301-02d3402cad15_del complete#033[00m
Dec  1 09:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48-userdata-shm.mount: Deactivated successfully.
Dec  1 09:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a63e21dff89f098e963c41c67239ba6583cc5294985bc66a33761e56972db1b2-merged.mount: Deactivated successfully.
Dec  1 09:44:01 compute-0 podman[252834]: 2025-12-01 09:44:01.27711271 +0000 UTC m=+0.249341320 container cleanup 7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.290 189495 INFO nova.compute.manager [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.291 189495 DEBUG oslo.service.loopingcall [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.292 189495 DEBUG nova.compute.manager [-] [instance: 38643437-7822-4834-8301-02d3402cad15] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.292 189495 DEBUG nova.network.neutron [-] [instance: 38643437-7822-4834-8301-02d3402cad15] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:44:01 compute-0 systemd[1]: libpod-conmon-7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48.scope: Deactivated successfully.
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.300 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 podman[252878]: 2025-12-01 09:44:01.389780614 +0000 UTC m=+0.081734005 container remove 7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.398 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bd7cbf-35f3-4899-86c5-790fc6ba124d]: (4, ('Mon Dec  1 09:44:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0 (7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48)\n7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48\nMon Dec  1 09:44:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0 (7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48)\n7d5a12f7b100c0cea26b452555a210e0aaa0545797eb00e2e6f29180ad1eaa48\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.400 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[89020cc9-ca7d-45ce-9c8b-eac37b757412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.402 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f64018c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.405 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 kernel: tap8f64018c-10: left promiscuous mode
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.409 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 openstack_network_exporter[205866]: ERROR   09:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:44:01 compute-0 openstack_network_exporter[205866]: ERROR   09:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:44:01 compute-0 openstack_network_exporter[205866]: ERROR   09:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.412 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9184a6-0e21-48c2-a001-38c4288e5907]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:01 compute-0 openstack_network_exporter[205866]: ERROR   09:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:44:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:44:01 compute-0 openstack_network_exporter[205866]: ERROR   09:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:44:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:44:01 compute-0 nova_compute[189491]: 2025-12-01 09:44:01.423 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.437 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[169645d2-9174-4a52-b785-0d6ade13e0c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.439 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[fde18fc3-72a1-4cf1-93c2-287294986881]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.462 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed166f2-87eb-477e-9163-98f3591c65bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539831, 'reachable_time': 38068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252892, 'error': None, 'target': 'ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.467 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f64018c-1aea-4b7d-b0f4-4ea18afaaeb0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:44:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:01.467 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f8ff36-8c55-4576-b134-e4c3708513b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d8f64018c\x2d1aea\x2d4b7d\x2db0f4\x2d4ea18afaaeb0.mount: Deactivated successfully.
Dec  1 09:44:02 compute-0 nova_compute[189491]: 2025-12-01 09:44:02.180 189495 DEBUG nova.network.neutron [-] [instance: 38643437-7822-4834-8301-02d3402cad15] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:02 compute-0 nova_compute[189491]: 2025-12-01 09:44:02.239 189495 INFO nova.compute.manager [-] [instance: 38643437-7822-4834-8301-02d3402cad15] Took 0.95 seconds to deallocate network for instance.#033[00m
Dec  1 09:44:02 compute-0 nova_compute[189491]: 2025-12-01 09:44:02.433 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:02 compute-0 nova_compute[189491]: 2025-12-01 09:44:02.434 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:02 compute-0 nova_compute[189491]: 2025-12-01 09:44:02.530 189495 DEBUG nova.compute.provider_tree [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:44:02 compute-0 nova_compute[189491]: 2025-12-01 09:44:02.637 189495 DEBUG nova.compute.manager [req-dfa01ded-192c-46c9-b35b-a9ac9f5cfab7 req-afa16826-4d1f-4f65-ba14-e8b3f7b17096 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 38643437-7822-4834-8301-02d3402cad15] Received event network-vif-deleted-7d0f49f6-e0e1-44b1-be36-fa4df3220ddb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:02 compute-0 nova_compute[189491]: 2025-12-01 09:44:02.953 189495 DEBUG nova.scheduler.client.report [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:44:03 compute-0 nova_compute[189491]: 2025-12-01 09:44:03.119 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:03 compute-0 nova_compute[189491]: 2025-12-01 09:44:03.272 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:03 compute-0 nova_compute[189491]: 2025-12-01 09:44:03.402 189495 INFO nova.scheduler.client.report [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Deleted allocations for instance 38643437-7822-4834-8301-02d3402cad15#033[00m
Dec  1 09:44:03 compute-0 nova_compute[189491]: 2025-12-01 09:44:03.893 189495 DEBUG oslo_concurrency.lockutils [None req-b338cc88-ba57-4677-970d-e2860ad08ca2 688e0c65604244fb9d423018bc88d238 d7764856ebb94acbaa0b40cbbf09cb3d - - default default] Lock "38643437-7822-4834-8301-02d3402cad15" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:06 compute-0 nova_compute[189491]: 2025-12-01 09:44:06.203 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:07 compute-0 ovn_controller[97794]: 2025-12-01T09:44:07Z|00094|binding|INFO|Releasing lport 8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90 from this chassis (sb_readonly=0)
Dec  1 09:44:07 compute-0 nova_compute[189491]: 2025-12-01 09:44:07.619 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:07 compute-0 podman[252894]: 2025-12-01 09:44:07.72241558 +0000 UTC m=+0.075508034 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:44:07 compute-0 podman[252893]: 2025-12-01 09:44:07.733441839 +0000 UTC m=+0.088970293 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 09:44:08 compute-0 nova_compute[189491]: 2025-12-01 09:44:08.121 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:10 compute-0 ovn_controller[97794]: 2025-12-01T09:44:10Z|00095|binding|INFO|Releasing lport 8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90 from this chassis (sb_readonly=0)
Dec  1 09:44:11 compute-0 nova_compute[189491]: 2025-12-01 09:44:11.048 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:11 compute-0 nova_compute[189491]: 2025-12-01 09:44:11.205 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:11 compute-0 podman[252936]: 2025-12-01 09:44:11.708288799 +0000 UTC m=+0.071086137 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  1 09:44:11 compute-0 podman[252937]: 2025-12-01 09:44:11.747070805 +0000 UTC m=+0.107831764 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 09:44:12 compute-0 ovn_controller[97794]: 2025-12-01T09:44:12Z|00096|binding|INFO|Releasing lport 8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90 from this chassis (sb_readonly=0)
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.020 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.123 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.181 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "70f48496-14bd-4e6f-8706-262d8e6b9510" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.182 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.263 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.435 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.436 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.445 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.446 189495 INFO nova.compute.claims [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.581 189495 DEBUG nova.compute.provider_tree [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.761 189495 DEBUG nova.scheduler.client.report [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.789 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.790 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.943 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:44:13 compute-0 nova_compute[189491]: 2025-12-01 09:44:13.944 189495 DEBUG nova.network.neutron [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.016 189495 INFO nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.036 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.157 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.160 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.161 189495 INFO nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Creating image(s)#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.162 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "/var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.163 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "/var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.164 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "/var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.183 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.270 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.273 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.274 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.287 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.355 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.356 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.422 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk 1073741824" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.425 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.426 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.467 189495 DEBUG nova.policy [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3f19699d7cb4493292a31daef496a1c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ee60ff0d117e468aa42c7d39022568ea', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.522 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.523 189495 DEBUG nova.virt.disk.api [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Checking if we can resize image /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.523 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.605 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.606 189495 DEBUG nova.virt.disk.api [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Cannot resize image /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.607 189495 DEBUG nova.objects.instance [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lazy-loading 'migration_context' on Instance uuid 70f48496-14bd-4e6f-8706-262d8e6b9510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.636 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.636 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Ensure instance console log exists: /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.637 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.639 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:14 compute-0 nova_compute[189491]: 2025-12-01 09:44:14.641 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:15 compute-0 nova_compute[189491]: 2025-12-01 09:44:15.591 189495 DEBUG nova.network.neutron [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Successfully created port: 9ba63f14-2eaa-45bf-8c16-59bd3a7893de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:44:16 compute-0 nova_compute[189491]: 2025-12-01 09:44:16.100 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764582241.097509, 38643437-7822-4834-8301-02d3402cad15 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:16 compute-0 nova_compute[189491]: 2025-12-01 09:44:16.101 189495 INFO nova.compute.manager [-] [instance: 38643437-7822-4834-8301-02d3402cad15] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:44:16 compute-0 nova_compute[189491]: 2025-12-01 09:44:16.120 189495 DEBUG nova.compute.manager [None req-3ff63181-24d6-4e62-864b-77c64f448b7c - - - - - -] [instance: 38643437-7822-4834-8301-02d3402cad15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:16 compute-0 nova_compute[189491]: 2025-12-01 09:44:16.210 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:16 compute-0 nova_compute[189491]: 2025-12-01 09:44:16.913 189495 DEBUG nova.network.neutron [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Successfully updated port: 9ba63f14-2eaa-45bf-8c16-59bd3a7893de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:44:16 compute-0 nova_compute[189491]: 2025-12-01 09:44:16.939 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:44:16 compute-0 nova_compute[189491]: 2025-12-01 09:44:16.940 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquired lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:44:16 compute-0 nova_compute[189491]: 2025-12-01 09:44:16.941 189495 DEBUG nova.network.neutron [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:44:17 compute-0 nova_compute[189491]: 2025-12-01 09:44:17.366 189495 DEBUG nova.compute.manager [req-daab4dbf-2b02-426c-8927-ff48d5dca1ca req-3327b052-aae4-4c7c-9f76-16a896fb4a9d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-changed-9ba63f14-2eaa-45bf-8c16-59bd3a7893de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:17 compute-0 nova_compute[189491]: 2025-12-01 09:44:17.367 189495 DEBUG nova.compute.manager [req-daab4dbf-2b02-426c-8927-ff48d5dca1ca req-3327b052-aae4-4c7c-9f76-16a896fb4a9d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Refreshing instance network info cache due to event network-changed-9ba63f14-2eaa-45bf-8c16-59bd3a7893de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:44:17 compute-0 nova_compute[189491]: 2025-12-01 09:44:17.367 189495 DEBUG oslo_concurrency.lockutils [req-daab4dbf-2b02-426c-8927-ff48d5dca1ca req-3327b052-aae4-4c7c-9f76-16a896fb4a9d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:44:17 compute-0 nova_compute[189491]: 2025-12-01 09:44:17.396 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:17 compute-0 nova_compute[189491]: 2025-12-01 09:44:17.528 189495 DEBUG nova.network.neutron [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:44:18 compute-0 nova_compute[189491]: 2025-12-01 09:44:18.130 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.793 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.794 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.801 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b5a25e93-8e59-4459-a45e-2d1d2d486bbc from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:44:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:19.802 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b5a25e93-8e59-4459-a45e-2d1d2d486bbc -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.001 189495 DEBUG nova.network.neutron [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Updating instance_info_cache with network_info: [{"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.034 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Releasing lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.035 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Instance network_info: |[{"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.036 189495 DEBUG oslo_concurrency.lockutils [req-daab4dbf-2b02-426c-8927-ff48d5dca1ca req-3327b052-aae4-4c7c-9f76-16a896fb4a9d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.036 189495 DEBUG nova.network.neutron [req-daab4dbf-2b02-426c-8927-ff48d5dca1ca req-3327b052-aae4-4c7c-9f76-16a896fb4a9d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Refreshing network info cache for port 9ba63f14-2eaa-45bf-8c16-59bd3a7893de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.040 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Start _get_guest_xml network_info=[{"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.049 189495 WARNING nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.055 189495 DEBUG nova.virt.libvirt.host [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.057 189495 DEBUG nova.virt.libvirt.host [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.062 189495 DEBUG nova.virt.libvirt.host [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.063 189495 DEBUG nova.virt.libvirt.host [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.064 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.064 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.065 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.065 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.065 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.066 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.066 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.067 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.067 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.067 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.068 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.068 189495 DEBUG nova.virt.hardware [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.073 189495 DEBUG nova.virt.libvirt.vif [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:44:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-943973460',display_name='tempest-TestNetworkBasicOps-server-943973460',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-943973460',id=9,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF0zO5PaN3W4VHI3MwtcjwVXnFCS2bVnALc/xgvovRqym1jyHZHeVTr6rztYp8+lLKApFr2SvhwBydda3c7yRYWVMdYesl/HUKsBijWwjyOiRwFrk6mYhv5XoI8BDBYXvw==',key_name='tempest-TestNetworkBasicOps-240726540',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee60ff0d117e468aa42c7d39022568ea',ramdisk_id='',reservation_id='r-fqfqoply',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-291434657',owner_user_name='tempest-TestNetworkBasicOps-291434657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:44:14Z,user_data=None,user_id='3f19699d7cb4493292a31daef496a1c2',uuid=70f48496-14bd-4e6f-8706-262d8e6b9510,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.073 189495 DEBUG nova.network.os_vif_util [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converting VIF {"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.074 189495 DEBUG nova.network.os_vif_util [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:a3:58,bridge_name='br-int',has_traffic_filtering=True,id=9ba63f14-2eaa-45bf-8c16-59bd3a7893de,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ba63f14-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.075 189495 DEBUG nova.objects.instance [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lazy-loading 'pci_devices' on Instance uuid 70f48496-14bd-4e6f-8706-262d8e6b9510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.094 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <uuid>70f48496-14bd-4e6f-8706-262d8e6b9510</uuid>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <name>instance-00000009</name>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <nova:name>tempest-TestNetworkBasicOps-server-943973460</nova:name>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:44:20</nova:creationTime>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:44:20 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:44:20 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:44:20 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:44:20 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:44:20 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:44:20 compute-0 nova_compute[189491]:        <nova:user uuid="3f19699d7cb4493292a31daef496a1c2">tempest-TestNetworkBasicOps-291434657-project-member</nova:user>
Dec  1 09:44:20 compute-0 nova_compute[189491]:        <nova:project uuid="ee60ff0d117e468aa42c7d39022568ea">tempest-TestNetworkBasicOps-291434657</nova:project>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:44:20 compute-0 nova_compute[189491]:        <nova:port uuid="9ba63f14-2eaa-45bf-8c16-59bd3a7893de">
Dec  1 09:44:20 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <system>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <entry name="serial">70f48496-14bd-4e6f-8706-262d8e6b9510</entry>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <entry name="uuid">70f48496-14bd-4e6f-8706-262d8e6b9510</entry>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </system>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <os>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  </os>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <features>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  </features>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk.config"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:06:a3:58"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <target dev="tap9ba63f14-2e"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/console.log" append="off"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <video>
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </video>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:44:20 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:44:20 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:44:20 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:44:20 compute-0 nova_compute[189491]: </domain>
Dec  1 09:44:20 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.096 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Preparing to wait for external event network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.096 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.096 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.097 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.098 189495 DEBUG nova.virt.libvirt.vif [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:44:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-943973460',display_name='tempest-TestNetworkBasicOps-server-943973460',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-943973460',id=9,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF0zO5PaN3W4VHI3MwtcjwVXnFCS2bVnALc/xgvovRqym1jyHZHeVTr6rztYp8+lLKApFr2SvhwBydda3c7yRYWVMdYesl/HUKsBijWwjyOiRwFrk6mYhv5XoI8BDBYXvw==',key_name='tempest-TestNetworkBasicOps-240726540',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee60ff0d117e468aa42c7d39022568ea',ramdisk_id='',reservation_id='r-fqfqoply',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-291434657',owner_user_name='tempest-TestNetworkBasicOps-291434657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:44:14Z,user_data=None,user_id='3f19699d7cb4493292a31daef496a1c2',uuid=70f48496-14bd-4e6f-8706-262d8e6b9510,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.098 189495 DEBUG nova.network.os_vif_util [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converting VIF {"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.099 189495 DEBUG nova.network.os_vif_util [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:a3:58,bridge_name='br-int',has_traffic_filtering=True,id=9ba63f14-2eaa-45bf-8c16-59bd3a7893de,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ba63f14-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.099 189495 DEBUG os_vif [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:a3:58,bridge_name='br-int',has_traffic_filtering=True,id=9ba63f14-2eaa-45bf-8c16-59bd3a7893de,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ba63f14-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.100 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.100 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.101 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.106 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.107 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ba63f14-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.108 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ba63f14-2e, col_values=(('external_ids', {'iface-id': '9ba63f14-2eaa-45bf-8c16-59bd3a7893de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:a3:58', 'vm-uuid': '70f48496-14bd-4e6f-8706-262d8e6b9510'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.110 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:20 compute-0 NetworkManager[56318]: <info>  [1764582260.1115] manager: (tap9ba63f14-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.113 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.118 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.120 189495 INFO os_vif [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:a3:58,bridge_name='br-int',has_traffic_filtering=True,id=9ba63f14-2eaa-45bf-8c16-59bd3a7893de,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ba63f14-2e')#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.190 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.191 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.191 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] No VIF found with MAC fa:16:3e:06:a3:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:44:20 compute-0 nova_compute[189491]: 2025-12-01 09:44:20.192 189495 INFO nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Using config drive#033[00m
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.835 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1981 Content-Type: application/json Date: Mon, 01 Dec 2025 09:44:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-bb681381-28b3-4944-834a-15f3ae040f8d x-openstack-request-id: req-bb681381-28b3-4944-834a-15f3ae040f8d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.836 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b5a25e93-8e59-4459-a45e-2d1d2d486bbc", "name": "tempest-ServerActionsTestJSON-server-2131740452", "status": "ACTIVE", "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "user_id": "7f215f81d0ab4d1fb34e21bf69e390fe", "metadata": {}, "hostId": "c9b15e809494a2ba06367bdd10e7a66e286bb0335f3ba75d1d3ef9f3", "image": {"id": "7ddeffd1-d06f-4a46-9e41-114974daa90e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7ddeffd1-d06f-4a46-9e41-114974daa90e"}]}, "flavor": {"id": "422f041c-a187-4aa2-8167-37f3eb0e89c2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/422f041c-a187-4aa2-8167-37f3eb0e89c2"}]}, "created": "2025-12-01T09:43:07Z", "updated": "2025-12-01T09:43:18Z", "addresses": {"tempest-ServerActionsTestJSON-1736415669-network": [{"version": 4, "addr": "10.100.0.14", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:32:12"}, {"version": 4, "addr": "192.168.122.190", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:32:12"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b5a25e93-8e59-4459-a45e-2d1d2d486bbc"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b5a25e93-8e59-4459-a45e-2d1d2d486bbc"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1047797503", "OS-SRV-USG:launched_at": "2025-12-01T09:43:18.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--503236202"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": 
null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.836 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b5a25e93-8e59-4459-a45e-2d1d2d486bbc used request id req-bb681381-28b3-4944-834a-15f3ae040f8d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.837 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5a25e93-8e59-4459-a45e-2d1d2d486bbc', 'name': 'tempest-ServerActionsTestJSON-server-2131740452', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000008', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a5fc8e7c1a854418b0a110cc22e69de0', 'user_id': '7f215f81d0ab4d1fb34e21bf69e390fe', 'hostId': 'c9b15e809494a2ba06367bdd10e7a66e286bb0335f3ba75d1d3ef9f3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.837 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.837 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.838 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:44:21.838149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.878 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.read.bytes volume: 30108160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.879 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.879 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.880 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:44:21.880311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 nova_compute[189491]: 2025-12-01 09:44:21.888 189495 INFO nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Creating config drive at /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk.config
Dec  1 09:44:21 compute-0 nova_compute[189491]: 2025-12-01 09:44:21.895 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3au3mn1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.895 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.allocation volume: 30679040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.896 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.897 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.897 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.897 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.read.latency volume: 544867664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.897 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.read.latency volume: 67797606 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.898 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.899 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:44:21.897451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.899 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.899 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:44:21.899066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.900 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.900 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.write.bytes volume: 72929280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:44:21.900454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.901 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.901 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.902 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:44:21.901970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.939 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.941 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.941 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.941 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.941 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.write.latency volume: 4836894804 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:44:21.941450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.942 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.942 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.942 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.943 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.943 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.write.requests volume: 321 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.943 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:44:21.943088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.944 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.944 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.944 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.944 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:44:21.944693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.949 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b5a25e93-8e59-4459-a45e-2d1d2d486bbc / tap9dc75317-7a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.949 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.950 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.950 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.950 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.950 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.950 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.951 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T09:44:21.950550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.951 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-2131740452>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-2131740452>]
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.951 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.951 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:44:21.951802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.952 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.952 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.952 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.952 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.953 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.953 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:44:21.952864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.953 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.954 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.954 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.955 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:44:21.954142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.955 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:44:21.955308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.956 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.956 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.956 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.956 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.957 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:44:21.956504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.957 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.957 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.957 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:44:21.957745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.958 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.958 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.958 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.959 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:44:21.958782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.959 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.959 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.960 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.960 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-2131740452>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-2131740452>]
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.960 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.961 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T09:44:21.959826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:44:21.960974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.961 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/memory.usage volume: 42.73828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.961 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.962 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.962 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:44:21.962143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.962 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.962 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.963 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.963 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.963 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:44:21.963178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:44:21.964372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.964 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/cpu volume: 36470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.964 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.965 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.965 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.965 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.965 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.965 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.965 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.966 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:44:21.965522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.966 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.966 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.967 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.967 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.967 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:44:21.967136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.967 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.968 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.968 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.read.requests volume: 1087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.968 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:44:21.968184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.969 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.969 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.969 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.969 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.969 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.970 14 DEBUG ceilometer.compute.pollsters [-] b5a25e93-8e59-4459-a45e-2d1d2d486bbc/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:44:21.969928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:44:21.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.039 189495 DEBUG oslo_concurrency.processutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3au3mn1" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:22 compute-0 kernel: tap9ba63f14-2e: entered promiscuous mode
Dec  1 09:44:22 compute-0 NetworkManager[56318]: <info>  [1764582262.1365] manager: (tap9ba63f14-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec  1 09:44:22 compute-0 ovn_controller[97794]: 2025-12-01T09:44:22Z|00097|binding|INFO|Claiming lport 9ba63f14-2eaa-45bf-8c16-59bd3a7893de for this chassis.
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.135 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:22 compute-0 ovn_controller[97794]: 2025-12-01T09:44:22Z|00098|binding|INFO|9ba63f14-2eaa-45bf-8c16-59bd3a7893de: Claiming fa:16:3e:06:a3:58 10.100.0.10
Dec  1 09:44:22 compute-0 ovn_controller[97794]: 2025-12-01T09:44:22Z|00099|binding|INFO|Setting lport 9ba63f14-2eaa-45bf-8c16-59bd3a7893de ovn-installed in OVS
Dec  1 09:44:22 compute-0 ovn_controller[97794]: 2025-12-01T09:44:22Z|00100|binding|INFO|Setting lport 9ba63f14-2eaa-45bf-8c16-59bd3a7893de up in Southbound
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.157 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.159 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.157 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:a3:58 10.100.0.10'], port_security=['fa:16:3e:06:a3:58 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '70f48496-14bd-4e6f-8706-262d8e6b9510', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee60ff0d117e468aa42c7d39022568ea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f9efef3-36d7-485c-9abd-714c5dc93256', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45465482-a276-408a-8d6b-656a92e66817, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=9ba63f14-2eaa-45bf-8c16-59bd3a7893de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.158 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 9ba63f14-2eaa-45bf-8c16-59bd3a7893de in datapath 4f3e9b63-cba6-412e-ba07-d66a8b38af02 bound to our chassis#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.160 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f3e9b63-cba6-412e-ba07-d66a8b38af02#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.173 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d903d4-eb81-4c44-91d9-881eda574997]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.174 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f3e9b63-c1 in ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.176 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f3e9b63-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.177 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1286c4-eb51-451f-820a-4abb256a2373]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.178 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b255acb5-3b86-4475-9716-c7d85f640cdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 systemd-udevd[253040]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:44:22 compute-0 systemd-machined[155812]: New machine qemu-9-instance-00000009.
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.193 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[75905d0f-5886-4bf8-9418-c89a2d5cd8bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec  1 09:44:22 compute-0 NetworkManager[56318]: <info>  [1764582262.2033] device (tap9ba63f14-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:44:22 compute-0 NetworkManager[56318]: <info>  [1764582262.2074] device (tap9ba63f14-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.218 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfc085e-3251-4659-a5f5-b9f4a8eee04c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 podman[253011]: 2025-12-01 09:44:22.234726159 +0000 UTC m=+0.114626260 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.252 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[021b8018-1893-4ab4-95c6-2b97390c49f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.262 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d89cd1ce-3cf3-4f0a-9829-509975c7d4a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 NetworkManager[56318]: <info>  [1764582262.2632] manager: (tap4f3e9b63-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec  1 09:44:22 compute-0 podman[253012]: 2025-12-01 09:44:22.275510685 +0000 UTC m=+0.150669321 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.295 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[91ac2b2a-61c5-4471-8117-a92267b05cee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.301 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf5fa4b-4248-42da-a0a5-e710168d1639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 NetworkManager[56318]: <info>  [1764582262.3307] device (tap4f3e9b63-c0): carrier: link connected
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.340 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[4856b8e4-dcf1-484a-82d9-76a55e586edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.358 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b47b443b-f3ff-4673-82fd-7ac3318dc404]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f3e9b63-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:a3:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550202, 'reachable_time': 33319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253092, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.380 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d89120-a085-4d2d-9b5d-1cfacb0e3c06]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:a3d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550202, 'tstamp': 550202}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253093, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.405 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[8e94b806-7af5-41b4-9d21-ebeb87abf408]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f3e9b63-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:a3:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550202, 'reachable_time': 33319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253094, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.452 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[94ca7cf1-545d-4913-a78c-71d7aba618bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.482 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.521 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[3958db09-7a8b-44f8-87e1-c004d5f88131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.523 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f3e9b63-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.524 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.524 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f3e9b63-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.526 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:22 compute-0 NetworkManager[56318]: <info>  [1764582262.5278] manager: (tap4f3e9b63-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec  1 09:44:22 compute-0 kernel: tap4f3e9b63-c0: entered promiscuous mode
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.530 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.532 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f3e9b63-c0, col_values=(('external_ids', {'iface-id': 'a52d5841-c07f-4d57-abbb-5b84c6008243'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.534 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:22 compute-0 ovn_controller[97794]: 2025-12-01T09:44:22Z|00101|binding|INFO|Releasing lport a52d5841-c07f-4d57-abbb-5b84c6008243 from this chassis (sb_readonly=0)
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.536 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f3e9b63-cba6-412e-ba07-d66a8b38af02.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f3e9b63-cba6-412e-ba07-d66a8b38af02.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.537 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[ba27117e-c0bb-41d5-8539-bc9817d22e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.538 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-4f3e9b63-cba6-412e-ba07-d66a8b38af02
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/4f3e9b63-cba6-412e-ba07-d66a8b38af02.pid.haproxy
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID 4f3e9b63-cba6-412e-ba07-d66a8b38af02
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:44:22 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:22.539 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'env', 'PROCESS_TAG=haproxy-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f3e9b63-cba6-412e-ba07-d66a8b38af02.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.547 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.803 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582262.8022077, 70f48496-14bd-4e6f-8706-262d8e6b9510 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.804 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] VM Started (Lifecycle Event)#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.827 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.835 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582262.802332, 70f48496-14bd-4e6f-8706-262d8e6b9510 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.835 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.857 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.863 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:44:22 compute-0 nova_compute[189491]: 2025-12-01 09:44:22.889 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:44:23 compute-0 podman[253131]: 2025-12-01 09:44:23.014133802 +0000 UTC m=+0.061383650 container create dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  1 09:44:23 compute-0 systemd[1]: Started libpod-conmon-dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282.scope.
Dec  1 09:44:23 compute-0 podman[253131]: 2025-12-01 09:44:22.983720379 +0000 UTC m=+0.030970257 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:44:23 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9d8db5ea65397090274cb271ee72eb5b09427fda6aef892f13909644fe44cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:44:23 compute-0 podman[253131]: 2025-12-01 09:44:23.132663325 +0000 UTC m=+0.179913203 container init dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 09:44:23 compute-0 nova_compute[189491]: 2025-12-01 09:44:23.133 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:23 compute-0 podman[253131]: 2025-12-01 09:44:23.143351907 +0000 UTC m=+0.190601765 container start dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 09:44:23 compute-0 neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02[253147]: [NOTICE]   (253151) : New worker (253153) forked
Dec  1 09:44:23 compute-0 neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02[253147]: [NOTICE]   (253151) : Loading success.
Dec  1 09:44:23 compute-0 nova_compute[189491]: 2025-12-01 09:44:23.956 189495 DEBUG nova.network.neutron [req-daab4dbf-2b02-426c-8927-ff48d5dca1ca req-3327b052-aae4-4c7c-9f76-16a896fb4a9d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Updated VIF entry in instance network info cache for port 9ba63f14-2eaa-45bf-8c16-59bd3a7893de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:44:23 compute-0 nova_compute[189491]: 2025-12-01 09:44:23.958 189495 DEBUG nova.network.neutron [req-daab4dbf-2b02-426c-8927-ff48d5dca1ca req-3327b052-aae4-4c7c-9f76-16a896fb4a9d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Updating instance_info_cache with network_info: [{"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:24 compute-0 nova_compute[189491]: 2025-12-01 09:44:24.143 189495 DEBUG oslo_concurrency.lockutils [req-daab4dbf-2b02-426c-8927-ff48d5dca1ca req-3327b052-aae4-4c7c-9f76-16a896fb4a9d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:44:25 compute-0 nova_compute[189491]: 2025-12-01 09:44:25.113 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:25 compute-0 nova_compute[189491]: 2025-12-01 09:44:25.930 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:26.536 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:26.536 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:26.537 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:28 compute-0 nova_compute[189491]: 2025-12-01 09:44:28.136 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:28 compute-0 nova_compute[189491]: 2025-12-01 09:44:28.526 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:28 compute-0 nova_compute[189491]: 2025-12-01 09:44:28.526 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:28 compute-0 nova_compute[189491]: 2025-12-01 09:44:28.528 189495 INFO nova.compute.manager [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Rebooting instance#033[00m
Dec  1 09:44:28 compute-0 nova_compute[189491]: 2025-12-01 09:44:28.544 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:44:28 compute-0 nova_compute[189491]: 2025-12-01 09:44:28.545 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquired lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:44:28 compute-0 nova_compute[189491]: 2025-12-01 09:44:28.545 189495 DEBUG nova.network.neutron [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.263 189495 DEBUG nova.compute.manager [req-aa0e833b-e64d-41df-9876-dbf4d511e54c req-1a9f414c-7096-4201-b6a8-fddb2c349d3e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.264 189495 DEBUG oslo_concurrency.lockutils [req-aa0e833b-e64d-41df-9876-dbf4d511e54c req-1a9f414c-7096-4201-b6a8-fddb2c349d3e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.265 189495 DEBUG oslo_concurrency.lockutils [req-aa0e833b-e64d-41df-9876-dbf4d511e54c req-1a9f414c-7096-4201-b6a8-fddb2c349d3e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.266 189495 DEBUG oslo_concurrency.lockutils [req-aa0e833b-e64d-41df-9876-dbf4d511e54c req-1a9f414c-7096-4201-b6a8-fddb2c349d3e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.266 189495 DEBUG nova.compute.manager [req-aa0e833b-e64d-41df-9876-dbf4d511e54c req-1a9f414c-7096-4201-b6a8-fddb2c349d3e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Processing event network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.267 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.275 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582269.274711, 70f48496-14bd-4e6f-8706-262d8e6b9510 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.275 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.279 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.289 189495 INFO nova.virt.libvirt.driver [-] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Instance spawned successfully.#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.290 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.314 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.327 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.335 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.335 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.336 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.336 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.336 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.337 189495 DEBUG nova.virt.libvirt.driver [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.367 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.419 189495 INFO nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Took 15.26 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.419 189495 DEBUG nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.496 189495 INFO nova.compute.manager [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Took 16.09 seconds to build instance.#033[00m
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.516 189495 DEBUG oslo_concurrency.lockutils [None req-190f2312-458e-447d-add1-a848d4f78c4e 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:29 compute-0 podman[253162]: 2025-12-01 09:44:29.72862562 +0000 UTC m=+0.103852357 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, 
config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 09:44:29 compute-0 podman[203700]: time="2025-12-01T09:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:44:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec  1 09:44:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5269 "" "Go-http-client/1.1"
Dec  1 09:44:29 compute-0 nova_compute[189491]: 2025-12-01 09:44:29.894 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:30 compute-0 nova_compute[189491]: 2025-12-01 09:44:30.116 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:30 compute-0 nova_compute[189491]: 2025-12-01 09:44:30.941 189495 DEBUG nova.network.neutron [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updating instance_info_cache with network_info: [{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:30 compute-0 nova_compute[189491]: 2025-12-01 09:44:30.970 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Releasing lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:44:30 compute-0 nova_compute[189491]: 2025-12-01 09:44:30.971 189495 DEBUG nova.compute.manager [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:31 compute-0 kernel: tap9dc75317-7a (unregistering): left promiscuous mode
Dec  1 09:44:31 compute-0 NetworkManager[56318]: <info>  [1764582271.1330] device (tap9dc75317-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.146 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:31 compute-0 ovn_controller[97794]: 2025-12-01T09:44:31Z|00102|binding|INFO|Releasing lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb from this chassis (sb_readonly=0)
Dec  1 09:44:31 compute-0 ovn_controller[97794]: 2025-12-01T09:44:31Z|00103|binding|INFO|Setting lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb down in Southbound
Dec  1 09:44:31 compute-0 ovn_controller[97794]: 2025-12-01T09:44:31Z|00104|binding|INFO|Removing iface tap9dc75317-7a ovn-installed in OVS
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.155 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.164 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:31 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  1 09:44:31 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 43.547s CPU time.
Dec  1 09:44:31 compute-0 systemd-machined[155812]: Machine qemu-8-instance-00000008 terminated.
Dec  1 09:44:31 compute-0 podman[253182]: 2025-12-01 09:44:31.243561373 +0000 UTC m=+0.090270755 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:44:31 compute-0 podman[253185]: 2025-12-01 09:44:31.248704079 +0000 UTC m=+0.091603309 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, container_name=kepler, release-0.7.12=, version=9.4, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, architecture=x86_64, 
io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container)
Dec  1 09:44:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:31.347 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:32:12 10.100.0.14'], port_security=['fa:16:3e:81:32:12 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b5a25e93-8e59-4459-a45e-2d1d2d486bbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5fc8e7c1a854418b0a110cc22e69de0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72afbc16-616c-4679-8b1b-dcb1251c5132', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3074f1d2-6f44-4fa9-90f3-bc6399575f2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=9dc75317-7a9b-4763-9189-4ea68bfc3ccb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:44:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:31.348 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 9dc75317-7a9b-4763-9189-4ea68bfc3ccb in datapath 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea unbound from our chassis#033[00m
Dec  1 09:44:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:31.350 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:44:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:31.351 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[faca0411-e75e-44cd-86ed-80d03d6b982d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:31.352 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea namespace which is not needed anymore#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.373 189495 INFO nova.virt.libvirt.driver [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Instance destroyed successfully.#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.374 189495 DEBUG nova.objects.instance [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lazy-loading 'resources' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:31 compute-0 openstack_network_exporter[205866]: ERROR   09:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:44:31 compute-0 openstack_network_exporter[205866]: ERROR   09:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:44:31 compute-0 openstack_network_exporter[205866]: ERROR   09:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:44:31 compute-0 openstack_network_exporter[205866]: ERROR   09:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:44:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:44:31 compute-0 openstack_network_exporter[205866]: ERROR   09:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:44:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.455 189495 DEBUG nova.virt.libvirt.vif [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2131740452',display_name='tempest-ServerActionsTestJSON-server-2131740452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2131740452',id=8,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQFUVYl1Xqq2gIQN4/eCJ8cnpGKeD2gZ7u/gkHTzBRwJJoku8v2NGbkC1lQIa8TB9NaZUcsSyfv1koauiYvXUFGYORBUpCcLDSn5ClA7+eTQ5bJXZBZqJiWDZmhR8SgRA==',key_name='tempest-keypair-1047797503',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:43:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a5fc8e7c1a854418b0a110cc22e69de0',ramdisk_id='',reservation_id='r-k3gqld7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-253829526',owner_user_name='tempest-ServerActionsTestJSON-253829526-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:44:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f215f81d0ab4d1fb34e21bf69e390fe',uuid=b5a25e93-8e59-4459-a45e-2d1d2d486bbc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.456 189495 DEBUG nova.network.os_vif_util [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converting VIF {"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.458 189495 DEBUG nova.network.os_vif_util [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.465 189495 DEBUG os_vif [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.468 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.468 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9dc75317-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.470 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.473 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.476 189495 INFO os_vif [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a')#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.485 189495 DEBUG nova.virt.libvirt.driver [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Start _get_guest_xml network_info=[{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.490 189495 WARNING nova.virt.libvirt.driver [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.497 189495 DEBUG nova.virt.libvirt.host [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.497 189495 DEBUG nova.virt.libvirt.host [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.502 189495 DEBUG nova.virt.libvirt.host [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.502 189495 DEBUG nova.virt.libvirt.host [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.503 189495 DEBUG nova.virt.libvirt.driver [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.503 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.503 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.503 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.504 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.504 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.504 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.504 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.504 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.504 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.505 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.505 189495 DEBUG nova.virt.hardware [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.505 189495 DEBUG nova.objects.instance [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.535 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.598 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.config --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.600 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.600 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.601 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.602 189495 DEBUG nova.virt.libvirt.vif [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2131740452',display_name='tempest-ServerActionsTestJSON-server-2131740452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2131740452',id=8,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQFUVYl1Xqq2gIQN4/eCJ8cnpGKeD2gZ7u/gkHTzBRwJJoku8v2NGbkC1lQIa8TB9NaZUcsSyfv1koauiYvXUFGYORBUpCcLDSn5ClA7+eTQ5bJXZBZqJiWDZmhR8SgRA==',key_name='tempest-keypair-1047797503',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:43:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a5fc8e7c1a854418b0a110cc22e69de0',ramdisk_id='',reservation_id='r-k3gqld7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-253829526',owner_user_name='tempest-ServerActionsTestJSON-253829526-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:44:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f215f81d0ab4d1fb34e21bf69e390fe',uuid=b5a25e93-8e59-4459-a45e-2d1d2d486bbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.603 189495 DEBUG nova.network.os_vif_util [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converting VIF {"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.604 189495 DEBUG nova.network.os_vif_util [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.605 189495 DEBUG nova.objects.instance [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.636 189495 DEBUG nova.compute.manager [req-5ced8bd0-2988-4b3b-a1e4-e07d377adc66 req-2011fa72-9c84-4776-9ee8-19a0e836639e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.636 189495 DEBUG oslo_concurrency.lockutils [req-5ced8bd0-2988-4b3b-a1e4-e07d377adc66 req-2011fa72-9c84-4776-9ee8-19a0e836639e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.637 189495 DEBUG oslo_concurrency.lockutils [req-5ced8bd0-2988-4b3b-a1e4-e07d377adc66 req-2011fa72-9c84-4776-9ee8-19a0e836639e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.637 189495 DEBUG oslo_concurrency.lockutils [req-5ced8bd0-2988-4b3b-a1e4-e07d377adc66 req-2011fa72-9c84-4776-9ee8-19a0e836639e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.637 189495 DEBUG nova.compute.manager [req-5ced8bd0-2988-4b3b-a1e4-e07d377adc66 req-2011fa72-9c84-4776-9ee8-19a0e836639e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] No waiting events found dispatching network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.637 189495 WARNING nova.compute.manager [req-5ced8bd0-2988-4b3b-a1e4-e07d377adc66 req-2011fa72-9c84-4776-9ee8-19a0e836639e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received unexpected event network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de for instance with vm_state active and task_state None.#033[00m
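Annotation: the lock/pop/warning sequence above is Nova's external-event dispatch for instance 70f48496 — a `network-vif-plugged` notification arrived from Neutron, but no task on the compute node had registered a waiter for it, so it is logged as unexpected (harmless here, since the instance is already active). The following is an illustrative sketch of that pattern, not Nova's actual code; class and method names are simplified:

```python
# Illustrative sketch (NOT Nova's real implementation) of the pattern in
# the log: a per-instance lock guards a map of waiting events; an arriving
# event that finds no registered waiter is reported as "unexpected".
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        # {instance_uuid: {event_name: callback}}
        self._waiters = {}

    def prepare(self, instance_uuid, event_name, callback):
        # A task that expects an event (e.g. while plugging a VIF)
        # registers a waiter before triggering the external action.
        with self._lock:
            self._waiters.setdefault(instance_uuid, {})[event_name] = callback

    def pop(self, instance_uuid, event_name):
        # The incoming notification pops its waiter; None means no one
        # was waiting, i.e. the event is unexpected.
        with self._lock:
            return self._waiters.get(instance_uuid, {}).pop(event_name, None)

events = InstanceEvents()
# No waiter was prepared for this event, as in the log above:
result = events.pop("70f48496-14bd-4e6f-8706-262d8e6b9510",
                    "network-vif-plugged-9ba63f14")
print(result)  # -> None, so the handler logs the WARNING
```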
Dec  1 09:44:31 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[252472]: [NOTICE]   (252476) : haproxy version is 2.8.14-c23fe91
Dec  1 09:44:31 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[252472]: [NOTICE]   (252476) : path to executable is /usr/sbin/haproxy
Dec  1 09:44:31 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[252472]: [WARNING]  (252476) : Exiting Master process...
Dec  1 09:44:31 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[252472]: [WARNING]  (252476) : Exiting Master process...
Dec  1 09:44:31 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[252472]: [ALERT]    (252476) : Current worker (252480) exited with code 143 (Terminated)
Dec  1 09:44:31 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[252472]: [WARNING]  (252476) : All workers exited. Exiting... (0)
Dec  1 09:44:31 compute-0 systemd[1]: libpod-85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c.scope: Deactivated successfully.
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.651 189495 DEBUG nova.virt.libvirt.driver [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <uuid>b5a25e93-8e59-4459-a45e-2d1d2d486bbc</uuid>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <name>instance-00000008</name>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <nova:name>tempest-ServerActionsTestJSON-server-2131740452</nova:name>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:44:31</nova:creationTime>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:44:31 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:44:31 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:44:31 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:44:31 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:44:31 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:44:31 compute-0 nova_compute[189491]:        <nova:user uuid="7f215f81d0ab4d1fb34e21bf69e390fe">tempest-ServerActionsTestJSON-253829526-project-member</nova:user>
Dec  1 09:44:31 compute-0 nova_compute[189491]:        <nova:project uuid="a5fc8e7c1a854418b0a110cc22e69de0">tempest-ServerActionsTestJSON-253829526</nova:project>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:44:31 compute-0 nova_compute[189491]:        <nova:port uuid="9dc75317-7a9b-4763-9189-4ea68bfc3ccb">
Dec  1 09:44:31 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <system>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <entry name="serial">b5a25e93-8e59-4459-a45e-2d1d2d486bbc</entry>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <entry name="uuid">b5a25e93-8e59-4459-a45e-2d1d2d486bbc</entry>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </system>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <os>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  </os>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <features>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  </features>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk.config"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:81:32:12"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <target dev="tap9dc75317-7a"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/console.log" append="off"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <video>
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </video>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <input type="keyboard" bus="usb"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:44:31 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:44:31 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:44:31 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:44:31 compute-0 nova_compute[189491]: </domain>
Dec  1 09:44:31 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
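Annotation: the `_get_guest_xml` dump above is the complete libvirt domain definition Nova generated for the hard reboot of instance b5a25e93 (q35 machine type, 128 MiB, one vCPU, virtio disk plus a SATA config-drive CD-ROM). It can be inspected programmatically; the sketch below parses a trimmed copy of the logged document, including the namespaced `nova:` metadata:

```python
# Parse a trimmed copy of the domain XML logged above and pull out a few
# fields, including the namespaced nova: metadata block.
import xml.etree.ElementTree as ET

domain_xml = """<domain type="kvm">
  <uuid>b5a25e93-8e59-4459-a45e-2d1d2d486bbc</uuid>
  <name>instance-00000008</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:flavor name="m1.nano">
        <nova:memory>128</nova:memory>
      </nova:flavor>
    </nova:instance>
  </metadata>
</domain>"""

NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}
root = ET.fromstring(domain_xml)
uuid = root.findtext("uuid")
memory_kib = int(root.findtext("memory"))  # libvirt <memory> is in KiB
flavor = root.find(".//nova:flavor", NOVA_NS).get("name")
# 131072 KiB == 128 MiB, matching both <nova:memory> and memory_mb=128
# from the Instance object earlier in the log.
print(uuid, memory_kib // 1024, flavor)
```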
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.652 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:31 compute-0 podman[253266]: 2025-12-01 09:44:31.657823529 +0000 UTC m=+0.168209258 container died 85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.725 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.727 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c-userdata-shm.mount: Deactivated successfully.
Dec  1 09:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0c3c361249835f8b3ea812ed9f69e217886cce34dfcd9b15355b850a38ad995-merged.mount: Deactivated successfully.
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.797 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.799 189495 DEBUG nova.objects.instance [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:31 compute-0 podman[253266]: 2025-12-01 09:44:31.837453135 +0000 UTC m=+0.347838864 container cleanup 85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.841 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:31 compute-0 systemd[1]: libpod-conmon-85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c.scope: Deactivated successfully.
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.912 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.913 189495 DEBUG nova.virt.disk.api [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Checking if we can resize image /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.913 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:31 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:31.935 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.936 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.986 189495 DEBUG oslo_concurrency.processutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.987 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.988 189495 DEBUG nova.virt.disk.api [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Cannot resize image /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
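Annotation: the "Cannot resize image ... to a smaller size" debug line is the outcome of `can_resize_image`, which compares the requested size (here 1073741824 bytes, the flavor's root_gb=1) against the image's current virtual size as reported by the `qemu-img info --output=json` calls above — the check only permits growing an image. A minimal, hedged sketch of that guard; the sample JSON values are hypothetical, chosen to match the 1 GiB root disk:

```python
# Minimal sketch (not Nova's exact code) of the grow-only size guard.
# The JSON below stands in for `qemu-img info --output=json` output;
# its "actual-size" value is made up for illustration.
import json

qemu_img_info = json.loads(
    '{"virtual-size": 1073741824, "format": "qcow2",'
    ' "actual-size": 21037056}'
)

def can_resize_image(virtual_size, requested_size):
    # Growing is allowed; a shrink request is refused, which is when
    # Nova emits the "Cannot resize image ... to a smaller size" debug.
    return requested_size >= virtual_size

print(can_resize_image(qemu_img_info["virtual-size"], 1073741824))  # True
```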
Dec  1 09:44:31 compute-0 nova_compute[189491]: 2025-12-01 09:44:31.988 189495 DEBUG nova.objects.instance [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lazy-loading 'migration_context' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:32 compute-0 podman[253301]: 2025-12-01 09:44:32.090570886 +0000 UTC m=+0.222355821 container remove 85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.107 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[50f7a927-3f55-481d-bfcd-0978a518f11e]: (4, ('Mon Dec  1 09:44:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea (85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c)\n85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c\nMon Dec  1 09:44:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea (85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c)\n85801ec2ddaf3bf41f957ab27f0b434fef45631a0ec3ea69a8772f17bb2cea1c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.111 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef283e2-cae9-4a1d-97b4-e729f065ca68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.112 189495 DEBUG nova.virt.libvirt.vif [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2131740452',display_name='tempest-ServerActionsTestJSON-server-2131740452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2131740452',id=8,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQFUVYl1Xqq2gIQN4/eCJ8cnpGKeD2gZ7u/gkHTzBRwJJoku8v2NGbkC1lQIa8TB9NaZUcsSyfv1koauiYvXUFGYORBUpCcLDSn5ClA7+eTQ5bJXZBZqJiWDZmhR8SgRA==',key_name='tempest-keypair-1047797503',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:43:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='a5fc8e7c1a854418b0a110cc22e69de0',ramdisk_id='',reservation_id='r-k3gqld7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-253829526',owner_user_name='tempest-ServerActionsTestJSON-253829526-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:44:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f215f81d0ab4d1fb34e21bf69e390fe',uuid=b5a25e93-8e59-4459-a45e-2d1d2d486bbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.113 189495 DEBUG nova.network.os_vif_util [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converting VIF {"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.113 189495 DEBUG nova.network.os_vif_util [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.112 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap528d6fcc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.114 189495 DEBUG os_vif [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.115 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.115 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.115 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.117 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.119 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.120 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9dc75317-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.120 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9dc75317-7a, col_values=(('external_ids', {'iface-id': '9dc75317-7a9b-4763-9189-4ea68bfc3ccb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:32:12', 'vm-uuid': 'b5a25e93-8e59-4459-a45e-2d1d2d486bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.122 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 NetworkManager[56318]: <info>  [1764582272.1228] manager: (tap9dc75317-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.123 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:44:32 compute-0 kernel: tap528d6fcc-40: left promiscuous mode
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.135 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.137 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.140 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd27ed7-253d-4724-9634-8fd96d77c8e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.146 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.147 189495 INFO os_vif [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a')#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.160 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3995ee-95ec-416f-a2f8-06a7d4bfd970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.161 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[df78ab7a-d9c4-485f-a92b-4468aa22716b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.181 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c97f717c-f3b9-4ebb-bee6-ae2792e49ea6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543726, 'reachable_time': 19472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253321, 'error': None, 'target': 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d528d6fcc\x2d4f6c\x2d4000\x2db20b\x2d6a6d9f6135ea.mount: Deactivated successfully.
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.188 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.189 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[34232260-8e1e-4fb8-8eb0-80a097cf5d3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.191 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:44:32 compute-0 kernel: tap9dc75317-7a: entered promiscuous mode
Dec  1 09:44:32 compute-0 systemd-udevd[253202]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:44:32 compute-0 NetworkManager[56318]: <info>  [1764582272.2331] manager: (tap9dc75317-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.236 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 ovn_controller[97794]: 2025-12-01T09:44:32Z|00105|binding|INFO|Claiming lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb for this chassis.
Dec  1 09:44:32 compute-0 ovn_controller[97794]: 2025-12-01T09:44:32Z|00106|binding|INFO|9dc75317-7a9b-4763-9189-4ea68bfc3ccb: Claiming fa:16:3e:81:32:12 10.100.0.14
Dec  1 09:44:32 compute-0 NetworkManager[56318]: <info>  [1764582272.2572] device (tap9dc75317-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:44:32 compute-0 NetworkManager[56318]: <info>  [1764582272.2579] device (tap9dc75317-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:44:32 compute-0 ovn_controller[97794]: 2025-12-01T09:44:32Z|00107|binding|INFO|Setting lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb ovn-installed in OVS
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.261 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.270 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 systemd-machined[155812]: New machine qemu-10-instance-00000008.
Dec  1 09:44:32 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000008.
Dec  1 09:44:32 compute-0 ovn_controller[97794]: 2025-12-01T09:44:32Z|00108|binding|INFO|Setting lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb up in Southbound
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.369 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:32:12 10.100.0.14'], port_security=['fa:16:3e:81:32:12 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b5a25e93-8e59-4459-a45e-2d1d2d486bbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5fc8e7c1a854418b0a110cc22e69de0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72afbc16-616c-4679-8b1b-dcb1251c5132', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3074f1d2-6f44-4fa9-90f3-bc6399575f2a, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=9dc75317-7a9b-4763-9189-4ea68bfc3ccb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.370 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 9dc75317-7a9b-4763-9189-4ea68bfc3ccb in datapath 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea bound to our chassis#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.372 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.385 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b625b2e3-ab28-4ca3-8cfa-7114bd5cc1ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.386 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap528d6fcc-41 in ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.389 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap528d6fcc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.389 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7d58fe72-67b7-4829-add0-722ab8941689]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.391 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0dcd42-b105-4185-b315-3786c5ed233a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.407 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc70c4f-8155-4abd-a274-a4dc97289eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.444 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[1cceeb98-1af0-4579-bbd6-a501fad15c2b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.485 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[297be38e-427b-4e34-a612-f45b04e5af28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.515 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5f48e96b-b62f-4fc0-8c0c-c8079a6cc392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 NetworkManager[56318]: <info>  [1764582272.5167] manager: (tap528d6fcc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.565 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[e245715d-653e-4fe0-af93-a7c551012340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.570 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[f2beb3f1-64fe-4761-a5ed-75c35f1ae79f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 NetworkManager[56318]: <info>  [1764582272.5979] device (tap528d6fcc-40): carrier: link connected
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.603 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[53e795c1-df32-4a13-8c23-a0e9f71efc5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.620 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[faa97a62-9605-41d1-9776-7679f0fdb772]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap528d6fcc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:98:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551229, 'reachable_time': 42812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253369, 'error': None, 'target': 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.640 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6478f135-0f5d-4fa0-99f3-505007563145]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:98ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 551229, 'tstamp': 551229}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253370, 'error': None, 'target': 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.662 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[642a8753-8af3-4644-86cf-8eeaebcd741b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap528d6fcc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:98:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551229, 'reachable_time': 42812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253371, 'error': None, 'target': 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.700 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[418a3eea-ad6a-4701-bebc-de691f06cd19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.767 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8da1ca-e19e-4cc5-aee7-e79d092e5f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.772 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap528d6fcc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.773 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.773 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap528d6fcc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.776 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 NetworkManager[56318]: <info>  [1764582272.7770] manager: (tap528d6fcc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  1 09:44:32 compute-0 kernel: tap528d6fcc-40: entered promiscuous mode
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.782 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.783 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap528d6fcc-40, col_values=(('external_ids', {'iface-id': '8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:32 compute-0 ovn_controller[97794]: 2025-12-01T09:44:32Z|00109|binding|INFO|Releasing lport 8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90 from this chassis (sb_readonly=0)
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.785 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.802 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.802 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/528d6fcc-4f6c-4000-b20b-6a6d9f6135ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/528d6fcc-4f6c-4000-b20b-6a6d9f6135ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.805 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[89aa32e7-dbe6-46f8-ae2f-abc4a195f902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.806 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/528d6fcc-4f6c-4000-b20b-6a6d9f6135ea.pid.haproxy
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:44:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:32.810 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'env', 'PROCESS_TAG=haproxy-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/528d6fcc-4f6c-4000-b20b-6a6d9f6135ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.833 189495 DEBUG nova.compute.manager [req-73eabe56-acd0-40bc-a986-35fdc6090a52 req-521e1f9d-6731-4232-9872-543ffa4918a2 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-unplugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.834 189495 DEBUG oslo_concurrency.lockutils [req-73eabe56-acd0-40bc-a986-35fdc6090a52 req-521e1f9d-6731-4232-9872-543ffa4918a2 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.834 189495 DEBUG oslo_concurrency.lockutils [req-73eabe56-acd0-40bc-a986-35fdc6090a52 req-521e1f9d-6731-4232-9872-543ffa4918a2 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.834 189495 DEBUG oslo_concurrency.lockutils [req-73eabe56-acd0-40bc-a986-35fdc6090a52 req-521e1f9d-6731-4232-9872-543ffa4918a2 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.834 189495 DEBUG nova.compute.manager [req-73eabe56-acd0-40bc-a986-35fdc6090a52 req-521e1f9d-6731-4232-9872-543ffa4918a2 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] No waiting events found dispatching network-vif-unplugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.835 189495 WARNING nova.compute.manager [req-73eabe56-acd0-40bc-a986-35fdc6090a52 req-521e1f9d-6731-4232-9872-543ffa4918a2 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received unexpected event network-vif-unplugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.874 189495 DEBUG nova.virt.libvirt.host [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Removed pending event for b5a25e93-8e59-4459-a45e-2d1d2d486bbc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.874 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582272.8736238, b5a25e93-8e59-4459-a45e-2d1d2d486bbc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.875 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.879 189495 DEBUG nova.compute.manager [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.884 189495 INFO nova.virt.libvirt.driver [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Instance rebooted successfully.#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.884 189495 DEBUG nova.compute.manager [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.943 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:32 compute-0 nova_compute[189491]: 2025-12-01 09:44:32.948 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:44:33 compute-0 nova_compute[189491]: 2025-12-01 09:44:33.140 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:33 compute-0 nova_compute[189491]: 2025-12-01 09:44:33.176 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec  1 09:44:33 compute-0 nova_compute[189491]: 2025-12-01 09:44:33.177 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582272.879637, b5a25e93-8e59-4459-a45e-2d1d2d486bbc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:33 compute-0 nova_compute[189491]: 2025-12-01 09:44:33.177 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] VM Started (Lifecycle Event)#033[00m
Dec  1 09:44:33 compute-0 nova_compute[189491]: 2025-12-01 09:44:33.191 189495 DEBUG oslo_concurrency.lockutils [None req-b9d9aa10-e3f0-404a-9b6b-6feb1ee02a36 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:33 compute-0 nova_compute[189491]: 2025-12-01 09:44:33.269 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:33 compute-0 nova_compute[189491]: 2025-12-01 09:44:33.276 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:44:33 compute-0 podman[253410]: 2025-12-01 09:44:33.222164838 +0000 UTC m=+0.037102046 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:44:33 compute-0 podman[253410]: 2025-12-01 09:44:33.359049501 +0000 UTC m=+0.173986939 container create 22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  1 09:44:33 compute-0 systemd[1]: Started libpod-conmon-22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181.scope.
Dec  1 09:44:33 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9ac3ac428e39bbc7dc4365e0bef6aa8988c9e272e253bf37016ccb16595493/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:44:33 compute-0 podman[253410]: 2025-12-01 09:44:33.661679411 +0000 UTC m=+0.476616619 container init 22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 09:44:33 compute-0 podman[253410]: 2025-12-01 09:44:33.66982511 +0000 UTC m=+0.484762298 container start 22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:44:33 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[253424]: [NOTICE]   (253428) : New worker (253430) forked
Dec  1 09:44:33 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[253424]: [NOTICE]   (253428) : Loading success.
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.927 189495 DEBUG nova.compute.manager [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.928 189495 DEBUG oslo_concurrency.lockutils [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.928 189495 DEBUG oslo_concurrency.lockutils [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.928 189495 DEBUG oslo_concurrency.lockutils [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.929 189495 DEBUG nova.compute.manager [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] No waiting events found dispatching network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.929 189495 WARNING nova.compute.manager [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received unexpected event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb for instance with vm_state active and task_state None.#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.929 189495 DEBUG nova.compute.manager [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.929 189495 DEBUG oslo_concurrency.lockutils [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.929 189495 DEBUG oslo_concurrency.lockutils [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.930 189495 DEBUG oslo_concurrency.lockutils [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.930 189495 DEBUG nova.compute.manager [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] No waiting events found dispatching network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:44:34 compute-0 nova_compute[189491]: 2025-12-01 09:44:34.930 189495 WARNING nova.compute.manager [req-4cc03a3f-6eb2-443c-b0b9-417cda32aab6 req-32ba8d87-b50d-45b0-818e-1016690b2365 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received unexpected event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb for instance with vm_state active and task_state None.#033[00m
Dec  1 09:44:36 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:36.193 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:37 compute-0 nova_compute[189491]: 2025-12-01 09:44:37.123 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:37 compute-0 nova_compute[189491]: 2025-12-01 09:44:37.293 189495 DEBUG nova.compute.manager [req-777c5a3e-e2f9-4f01-98fb-4ec7e36b09dd req-8c84c941-8466-419b-90b0-6070ca3881a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:37 compute-0 nova_compute[189491]: 2025-12-01 09:44:37.294 189495 DEBUG oslo_concurrency.lockutils [req-777c5a3e-e2f9-4f01-98fb-4ec7e36b09dd req-8c84c941-8466-419b-90b0-6070ca3881a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:37 compute-0 nova_compute[189491]: 2025-12-01 09:44:37.294 189495 DEBUG oslo_concurrency.lockutils [req-777c5a3e-e2f9-4f01-98fb-4ec7e36b09dd req-8c84c941-8466-419b-90b0-6070ca3881a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:37 compute-0 nova_compute[189491]: 2025-12-01 09:44:37.294 189495 DEBUG oslo_concurrency.lockutils [req-777c5a3e-e2f9-4f01-98fb-4ec7e36b09dd req-8c84c941-8466-419b-90b0-6070ca3881a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:37 compute-0 nova_compute[189491]: 2025-12-01 09:44:37.295 189495 DEBUG nova.compute.manager [req-777c5a3e-e2f9-4f01-98fb-4ec7e36b09dd req-8c84c941-8466-419b-90b0-6070ca3881a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] No waiting events found dispatching network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:44:37 compute-0 nova_compute[189491]: 2025-12-01 09:44:37.295 189495 WARNING nova.compute.manager [req-777c5a3e-e2f9-4f01-98fb-4ec7e36b09dd req-8c84c941-8466-419b-90b0-6070ca3881a5 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received unexpected event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb for instance with vm_state active and task_state None.#033[00m
Dec  1 09:44:38 compute-0 nova_compute[189491]: 2025-12-01 09:44:38.144 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:38 compute-0 nova_compute[189491]: 2025-12-01 09:44:38.189 189495 DEBUG nova.compute.manager [req-ff1eb14a-bc96-44a8-845f-7cbf85d4bc88 req-5bbe9d6a-78dc-4fc8-8545-6a595d44a690 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-changed-9ba63f14-2eaa-45bf-8c16-59bd3a7893de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:38 compute-0 nova_compute[189491]: 2025-12-01 09:44:38.190 189495 DEBUG nova.compute.manager [req-ff1eb14a-bc96-44a8-845f-7cbf85d4bc88 req-5bbe9d6a-78dc-4fc8-8545-6a595d44a690 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Refreshing instance network info cache due to event network-changed-9ba63f14-2eaa-45bf-8c16-59bd3a7893de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:44:38 compute-0 nova_compute[189491]: 2025-12-01 09:44:38.190 189495 DEBUG oslo_concurrency.lockutils [req-ff1eb14a-bc96-44a8-845f-7cbf85d4bc88 req-5bbe9d6a-78dc-4fc8-8545-6a595d44a690 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:44:38 compute-0 nova_compute[189491]: 2025-12-01 09:44:38.190 189495 DEBUG oslo_concurrency.lockutils [req-ff1eb14a-bc96-44a8-845f-7cbf85d4bc88 req-5bbe9d6a-78dc-4fc8-8545-6a595d44a690 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:44:38 compute-0 nova_compute[189491]: 2025-12-01 09:44:38.191 189495 DEBUG nova.network.neutron [req-ff1eb14a-bc96-44a8-845f-7cbf85d4bc88 req-5bbe9d6a-78dc-4fc8-8545-6a595d44a690 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Refreshing network info cache for port 9ba63f14-2eaa-45bf-8c16-59bd3a7893de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:44:38 compute-0 podman[253440]: 2025-12-01 09:44:38.695603443 +0000 UTC m=+0.072241374 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal)
Dec  1 09:44:38 compute-0 podman[253441]: 2025-12-01 09:44:38.702126633 +0000 UTC m=+0.074643684 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 09:44:38 compute-0 nova_compute[189491]: 2025-12-01 09:44:38.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:38 compute-0 nova_compute[189491]: 2025-12-01 09:44:38.713 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:44:39 compute-0 nova_compute[189491]: 2025-12-01 09:44:39.451 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:44:39 compute-0 nova_compute[189491]: 2025-12-01 09:44:39.452 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:44:39 compute-0 nova_compute[189491]: 2025-12-01 09:44:39.453 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:44:41 compute-0 nova_compute[189491]: 2025-12-01 09:44:41.925 189495 DEBUG nova.network.neutron [req-ff1eb14a-bc96-44a8-845f-7cbf85d4bc88 req-5bbe9d6a-78dc-4fc8-8545-6a595d44a690 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Updated VIF entry in instance network info cache for port 9ba63f14-2eaa-45bf-8c16-59bd3a7893de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:44:41 compute-0 nova_compute[189491]: 2025-12-01 09:44:41.927 189495 DEBUG nova.network.neutron [req-ff1eb14a-bc96-44a8-845f-7cbf85d4bc88 req-5bbe9d6a-78dc-4fc8-8545-6a595d44a690 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Updating instance_info_cache with network_info: [{"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:42 compute-0 nova_compute[189491]: 2025-12-01 09:44:42.127 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:42 compute-0 nova_compute[189491]: 2025-12-01 09:44:42.494 189495 DEBUG oslo_concurrency.lockutils [req-ff1eb14a-bc96-44a8-845f-7cbf85d4bc88 req-5bbe9d6a-78dc-4fc8-8545-6a595d44a690 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:44:42 compute-0 podman[253475]: 2025-12-01 09:44:42.703993794 +0000 UTC m=+0.073458905 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:44:42 compute-0 podman[253476]: 2025-12-01 09:44:42.751646847 +0000 UTC m=+0.121185099 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 09:44:43 compute-0 nova_compute[189491]: 2025-12-01 09:44:43.147 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:43 compute-0 nova_compute[189491]: 2025-12-01 09:44:43.611 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:44 compute-0 nova_compute[189491]: 2025-12-01 09:44:44.067 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updating instance_info_cache with network_info: [{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:44 compute-0 nova_compute[189491]: 2025-12-01 09:44:44.092 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:44:44 compute-0 nova_compute[189491]: 2025-12-01 09:44:44.093 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.756 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.757 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.758 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.758 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.795 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.797 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:45 compute-0 nova_compute[189491]: 2025-12-01 09:44:45.829 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.272 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.344 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.345 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.353 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.354 189495 INFO nova.compute.claims [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.373 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.374 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.430 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.437 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.502 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.503 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:46 compute-0 nova_compute[189491]: 2025-12-01 09:44:46.572 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.105 189495 DEBUG nova.compute.provider_tree [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.111 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.112 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5026MB free_disk=72.3106918334961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.112 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.131 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.138 189495 DEBUG nova.scheduler.client.report [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.159 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.160 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.164 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.487 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.497 189495 DEBUG nova.network.neutron [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.550 189495 INFO nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.568 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance b5a25e93-8e59-4459-a45e-2d1d2d486bbc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.569 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 70f48496-14bd-4e6f-8706-262d8e6b9510 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.569 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 332bb5cd-96b4-43a8-9d53-1d889d5e2df8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.570 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.570 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.649 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.676 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.779 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.815 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.816 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:47 compute-0 nova_compute[189491]: 2025-12-01 09:44:47.979 189495 DEBUG nova.policy [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '22ae22ecd4ce4774b704b3aa723962b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '104bf2f5f6f1439e9fc460940d474ff7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.150 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.300 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.303 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.304 189495 INFO nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Creating image(s)#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.306 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "/var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.307 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "/var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.309 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "/var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.336 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.437 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.444 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.445 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.459 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.522 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.524 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.685 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk 1073741824" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.687 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.688 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.758 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.759 189495 DEBUG nova.virt.disk.api [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Checking if we can resize image /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.760 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.837 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.841 189495 DEBUG nova.virt.disk.api [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Cannot resize image /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.842 189495 DEBUG nova.objects.instance [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lazy-loading 'migration_context' on Instance uuid 332bb5cd-96b4-43a8-9d53-1d889d5e2df8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.910 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.911 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Ensure instance console log exists: /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.912 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.913 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:48 compute-0 nova_compute[189491]: 2025-12-01 09:44:48.914 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:50 compute-0 nova_compute[189491]: 2025-12-01 09:44:50.817 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:50 compute-0 nova_compute[189491]: 2025-12-01 09:44:50.819 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:50 compute-0 nova_compute[189491]: 2025-12-01 09:44:50.819 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:50 compute-0 nova_compute[189491]: 2025-12-01 09:44:50.820 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:50 compute-0 nova_compute[189491]: 2025-12-01 09:44:50.820 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:44:51 compute-0 nova_compute[189491]: 2025-12-01 09:44:51.336 189495 DEBUG nova.network.neutron [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Successfully created port: 39057be4-bfdf-4611-a03e-05cf570b079d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:44:52 compute-0 nova_compute[189491]: 2025-12-01 09:44:52.136 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:52 compute-0 podman[253546]: 2025-12-01 09:44:52.697512412 +0000 UTC m=+0.072313557 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:44:52 compute-0 nova_compute[189491]: 2025-12-01 09:44:52.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:52 compute-0 nova_compute[189491]: 2025-12-01 09:44:52.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:44:52 compute-0 podman[253547]: 2025-12-01 09:44:52.739374995 +0000 UTC m=+0.108774998 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:44:53 compute-0 nova_compute[189491]: 2025-12-01 09:44:53.151 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:53 compute-0 nova_compute[189491]: 2025-12-01 09:44:53.908 189495 DEBUG nova.network.neutron [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Successfully updated port: 39057be4-bfdf-4611-a03e-05cf570b079d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:44:53 compute-0 nova_compute[189491]: 2025-12-01 09:44:53.952 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:44:53 compute-0 nova_compute[189491]: 2025-12-01 09:44:53.953 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquired lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:44:53 compute-0 nova_compute[189491]: 2025-12-01 09:44:53.953 189495 DEBUG nova.network.neutron [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:44:54 compute-0 nova_compute[189491]: 2025-12-01 09:44:54.133 189495 DEBUG nova.compute.manager [req-45b3fb72-d141-4d1b-9c91-f2bdd6cc606f req-46c4b68e-60cf-4285-8cd6-c61bca5143f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received event network-changed-39057be4-bfdf-4611-a03e-05cf570b079d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:54 compute-0 nova_compute[189491]: 2025-12-01 09:44:54.134 189495 DEBUG nova.compute.manager [req-45b3fb72-d141-4d1b-9c91-f2bdd6cc606f req-46c4b68e-60cf-4285-8cd6-c61bca5143f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Refreshing instance network info cache due to event network-changed-39057be4-bfdf-4611-a03e-05cf570b079d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:44:54 compute-0 nova_compute[189491]: 2025-12-01 09:44:54.135 189495 DEBUG oslo_concurrency.lockutils [req-45b3fb72-d141-4d1b-9c91-f2bdd6cc606f req-46c4b68e-60cf-4285-8cd6-c61bca5143f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:44:54 compute-0 nova_compute[189491]: 2025-12-01 09:44:54.159 189495 DEBUG nova.network.neutron [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.754 189495 DEBUG nova.network.neutron [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Updating instance_info_cache with network_info: [{"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.776 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Releasing lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.777 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Instance network_info: |[{"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.778 189495 DEBUG oslo_concurrency.lockutils [req-45b3fb72-d141-4d1b-9c91-f2bdd6cc606f req-46c4b68e-60cf-4285-8cd6-c61bca5143f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.779 189495 DEBUG nova.network.neutron [req-45b3fb72-d141-4d1b-9c91-f2bdd6cc606f req-46c4b68e-60cf-4285-8cd6-c61bca5143f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Refreshing network info cache for port 39057be4-bfdf-4611-a03e-05cf570b079d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.782 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Start _get_guest_xml network_info=[{"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.791 189495 WARNING nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.800 189495 DEBUG nova.virt.libvirt.host [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.801 189495 DEBUG nova.virt.libvirt.host [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.806 189495 DEBUG nova.virt.libvirt.host [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.807 189495 DEBUG nova.virt.libvirt.host [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.808 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.808 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.809 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.810 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.811 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.811 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.812 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.813 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.813 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.814 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.815 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.815 189495 DEBUG nova.virt.hardware [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.820 189495 DEBUG nova.virt.libvirt.vif [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1882758829',display_name='tempest-ServersTestManualDisk-server-1882758829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1882758829',id=10,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEBun4wV8PqH+paF8mL/kjAoF3csnHUjxB9+OJjPrJ9zvgm5mf5drjzi5QsaL5k8m7FaaWkmzV9DwtcJrOsdFYWS8HcOG+BcZQThXRdW9XzhSoxmfPyEiSufuVm2QUPnEQ==',key_name='tempest-keypair-1598602579',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='104bf2f5f6f1439e9fc460940d474ff7',ramdisk_id='',reservation_id='r-okhxk7ta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-192501260',owner_user_name='tempest-ServersTestManualDisk-192501260-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:44:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22ae22ecd4ce4774b704b3aa723962b8',uuid=332bb5cd-96b4-43a8-9d53-1d889d5e2df8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.821 189495 DEBUG nova.network.os_vif_util [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Converting VIF {"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.822 189495 DEBUG nova.network.os_vif_util [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:29:3b,bridge_name='br-int',has_traffic_filtering=True,id=39057be4-bfdf-4611-a03e-05cf570b079d,network=Network(00607b38-c4af-4481-a204-66b72a06ac7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39057be4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.824 189495 DEBUG nova.objects.instance [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 332bb5cd-96b4-43a8-9d53-1d889d5e2df8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.848 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <uuid>332bb5cd-96b4-43a8-9d53-1d889d5e2df8</uuid>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <name>instance-0000000a</name>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <nova:name>tempest-ServersTestManualDisk-server-1882758829</nova:name>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:44:55</nova:creationTime>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:44:55 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:44:55 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:44:55 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:44:55 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:44:55 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:44:55 compute-0 nova_compute[189491]:        <nova:user uuid="22ae22ecd4ce4774b704b3aa723962b8">tempest-ServersTestManualDisk-192501260-project-member</nova:user>
Dec  1 09:44:55 compute-0 nova_compute[189491]:        <nova:project uuid="104bf2f5f6f1439e9fc460940d474ff7">tempest-ServersTestManualDisk-192501260</nova:project>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:44:55 compute-0 nova_compute[189491]:        <nova:port uuid="39057be4-bfdf-4611-a03e-05cf570b079d">
Dec  1 09:44:55 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <system>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <entry name="serial">332bb5cd-96b4-43a8-9d53-1d889d5e2df8</entry>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <entry name="uuid">332bb5cd-96b4-43a8-9d53-1d889d5e2df8</entry>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </system>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <os>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  </os>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <features>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  </features>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk.config"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:64:29:3b"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <target dev="tap39057be4-bf"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/console.log" append="off"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <video>
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </video>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:44:55 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:44:55 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:44:55 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:44:55 compute-0 nova_compute[189491]: </domain>
Dec  1 09:44:55 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.862 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Preparing to wait for external event network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.862 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.862 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.863 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.864 189495 DEBUG nova.virt.libvirt.vif [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1882758829',display_name='tempest-ServersTestManualDisk-server-1882758829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1882758829',id=10,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEBun4wV8PqH+paF8mL/kjAoF3csnHUjxB9+OJjPrJ9zvgm5mf5drjzi5QsaL5k8m7FaaWkmzV9DwtcJrOsdFYWS8HcOG+BcZQThXRdW9XzhSoxmfPyEiSufuVm2QUPnEQ==',key_name='tempest-keypair-1598602579',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='104bf2f5f6f1439e9fc460940d474ff7',ramdisk_id='',reservation_id='r-okhxk7ta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-192501260',owner_user_name='tempest-ServersTestManualDisk-192501260-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:44:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22ae22ecd4ce4774b704b3aa723962b8',uuid=332bb5cd-96b4-43a8-9d53-1d889d5e2df8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.864 189495 DEBUG nova.network.os_vif_util [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Converting VIF {"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.865 189495 DEBUG nova.network.os_vif_util [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:29:3b,bridge_name='br-int',has_traffic_filtering=True,id=39057be4-bfdf-4611-a03e-05cf570b079d,network=Network(00607b38-c4af-4481-a204-66b72a06ac7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39057be4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.865 189495 DEBUG os_vif [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:29:3b,bridge_name='br-int',has_traffic_filtering=True,id=39057be4-bfdf-4611-a03e-05cf570b079d,network=Network(00607b38-c4af-4481-a204-66b72a06ac7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39057be4-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.866 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.866 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.869 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.873 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.874 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39057be4-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.875 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39057be4-bf, col_values=(('external_ids', {'iface-id': '39057be4-bfdf-4611-a03e-05cf570b079d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:29:3b', 'vm-uuid': '332bb5cd-96b4-43a8-9d53-1d889d5e2df8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.877 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.879 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:44:55 compute-0 NetworkManager[56318]: <info>  [1764582295.8821] manager: (tap39057be4-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.888 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.889 189495 INFO os_vif [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:29:3b,bridge_name='br-int',has_traffic_filtering=True,id=39057be4-bfdf-4611-a03e-05cf570b079d,network=Network(00607b38-c4af-4481-a204-66b72a06ac7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39057be4-bf')#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.953 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.954 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.955 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] No VIF found with MAC fa:16:3e:64:29:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:44:55 compute-0 nova_compute[189491]: 2025-12-01 09:44:55.956 189495 INFO nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Using config drive#033[00m
Dec  1 09:44:56 compute-0 nova_compute[189491]: 2025-12-01 09:44:56.916 189495 INFO nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Creating config drive at /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk.config#033[00m
Dec  1 09:44:56 compute-0 nova_compute[189491]: 2025-12-01 09:44:56.924 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0r7a0611 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.049 189495 DEBUG oslo_concurrency.processutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0r7a0611" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:44:57 compute-0 NetworkManager[56318]: <info>  [1764582297.1221] manager: (tap39057be4-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Dec  1 09:44:57 compute-0 kernel: tap39057be4-bf: entered promiscuous mode
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.128 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:57 compute-0 ovn_controller[97794]: 2025-12-01T09:44:57Z|00110|binding|INFO|Claiming lport 39057be4-bfdf-4611-a03e-05cf570b079d for this chassis.
Dec  1 09:44:57 compute-0 ovn_controller[97794]: 2025-12-01T09:44:57Z|00111|binding|INFO|39057be4-bfdf-4611-a03e-05cf570b079d: Claiming fa:16:3e:64:29:3b 10.100.0.9
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.144 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:29:3b 10.100.0.9'], port_security=['fa:16:3e:64:29:3b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '332bb5cd-96b4-43a8-9d53-1d889d5e2df8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00607b38-c4af-4481-a204-66b72a06ac7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '104bf2f5f6f1439e9fc460940d474ff7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3af9e5be-2f19-4cbe-93f7-131a0ec5f44d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c41e6de-1e1f-49da-9091-402137f073fd, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=39057be4-bfdf-4611-a03e-05cf570b079d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.147 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 39057be4-bfdf-4611-a03e-05cf570b079d in datapath 00607b38-c4af-4481-a204-66b72a06ac7e bound to our chassis#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.148 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00607b38-c4af-4481-a204-66b72a06ac7e#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.157 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.159 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:57 compute-0 ovn_controller[97794]: 2025-12-01T09:44:57Z|00112|binding|INFO|Setting lport 39057be4-bfdf-4611-a03e-05cf570b079d up in Southbound
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.163 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6127d2-40d1-4409-8db4-8305126386cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.165 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00607b38-c1 in ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:44:57 compute-0 ovn_controller[97794]: 2025-12-01T09:44:57Z|00113|binding|INFO|Setting lport 39057be4-bfdf-4611-a03e-05cf570b079d ovn-installed in OVS
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.169 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00607b38-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.169 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3be8ea-088d-4e9d-ad66-0725c2222faa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.168 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.173 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[dd69764c-9343-4d82-9354-0c9dc905528e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 systemd-udevd[253610]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:44:57 compute-0 systemd-machined[155812]: New machine qemu-11-instance-0000000a.
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.186 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc2f520-9cbc-45f8-93b3-f9fa70ad8996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 NetworkManager[56318]: <info>  [1764582297.1929] device (tap39057be4-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:44:57 compute-0 NetworkManager[56318]: <info>  [1764582297.1936] device (tap39057be4-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:44:57 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.216 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[3c68e918-b2fa-41af-bd4d-79c0917b2872]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.248 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[dadafdad-98fc-486f-828d-ca484bbca7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.260 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[27df75a2-9b17-43b8-8d8b-a5519bcfd076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 systemd-udevd[253613]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:44:57 compute-0 NetworkManager[56318]: <info>  [1764582297.2619] manager: (tap00607b38-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.298 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a51933-53a3-42c0-b607-4f4c4f7141f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.303 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[95ee6436-18b6-464c-ac43-d9799ca375d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 NetworkManager[56318]: <info>  [1764582297.3344] device (tap00607b38-c0): carrier: link connected
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.356 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd5a91f-7747-4589-b1d0-da8f8a2afb5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.384 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d10a1fff-5f86-409f-8800-62f4b859911c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00607b38-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:1f:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553702, 'reachable_time': 15208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253642, 'error': None, 'target': 'ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.404 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[579f7f51-7073-4ede-ae8e-556f04365148]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:1f28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553702, 'tstamp': 553702}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253643, 'error': None, 'target': 'ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.426 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a69c01-2ebb-4236-bce8-867265e9bc50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00607b38-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:1f:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553702, 'reachable_time': 15208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253644, 'error': None, 'target': 'ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.471 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[be48b19b-367f-4130-ade2-f9a21fbf80a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.535 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b20e9254-b70a-45d6-be6d-5791ffa8e82c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.537 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00607b38-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.538 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.539 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00607b38-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.542 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:57 compute-0 NetworkManager[56318]: <info>  [1764582297.5437] manager: (tap00607b38-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  1 09:44:57 compute-0 kernel: tap00607b38-c0: entered promiscuous mode
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.547 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00607b38-c0, col_values=(('external_ids', {'iface-id': 'dfde034d-1bb5-4328-b92a-74a56d35a655'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:44:57 compute-0 ovn_controller[97794]: 2025-12-01T09:44:57Z|00114|binding|INFO|Releasing lport dfde034d-1bb5-4328-b92a-74a56d35a655 from this chassis (sb_readonly=0)
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.552 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.554 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00607b38-c4af-4481-a204-66b72a06ac7e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00607b38-c4af-4481-a204-66b72a06ac7e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.556 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e227edad-25e9-4e46-bda6-2dfc038f5e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.559 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-00607b38-c4af-4481-a204-66b72a06ac7e
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/00607b38-c4af-4481-a204-66b72a06ac7e.pid.haproxy
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID 00607b38-c4af-4481-a204-66b72a06ac7e
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:44:57 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:44:57.562 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e', 'env', 'PROCESS_TAG=haproxy-00607b38-c4af-4481-a204-66b72a06ac7e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00607b38-c4af-4481-a204-66b72a06ac7e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.568 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582297.567545, 332bb5cd-96b4-43a8-9d53-1d889d5e2df8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.568 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] VM Started (Lifecycle Event)#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.571 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.586 189495 DEBUG nova.compute.manager [req-42896686-8ba3-4da5-b6fb-6b9c377a4a16 req-08a66ab5-4d17-448e-a69c-130544a73e8b ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received event network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.587 189495 DEBUG oslo_concurrency.lockutils [req-42896686-8ba3-4da5-b6fb-6b9c377a4a16 req-08a66ab5-4d17-448e-a69c-130544a73e8b ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.587 189495 DEBUG oslo_concurrency.lockutils [req-42896686-8ba3-4da5-b6fb-6b9c377a4a16 req-08a66ab5-4d17-448e-a69c-130544a73e8b ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.587 189495 DEBUG oslo_concurrency.lockutils [req-42896686-8ba3-4da5-b6fb-6b9c377a4a16 req-08a66ab5-4d17-448e-a69c-130544a73e8b ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.587 189495 DEBUG nova.compute.manager [req-42896686-8ba3-4da5-b6fb-6b9c377a4a16 req-08a66ab5-4d17-448e-a69c-130544a73e8b ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Processing event network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.588 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.593 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.598 189495 INFO nova.virt.libvirt.driver [-] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Instance spawned successfully.#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.598 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.606 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.611 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.629 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.629 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.629 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.630 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.630 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.630 189495 DEBUG nova.virt.libvirt.driver [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.634 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.634 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582297.567731, 332bb5cd-96b4-43a8-9d53-1d889d5e2df8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.634 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.663 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.670 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582297.5919378, 332bb5cd-96b4-43a8-9d53-1d889d5e2df8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.670 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.696 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.699 189495 INFO nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Took 9.40 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.699 189495 DEBUG nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.704 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.724 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.770 189495 INFO nova.compute.manager [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Took 11.46 seconds to build instance.#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.790 189495 DEBUG oslo_concurrency.lockutils [None req-18ff347c-05ba-43e6-8aaf-c7477c13626e 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.898 189495 DEBUG nova.network.neutron [req-45b3fb72-d141-4d1b-9c91-f2bdd6cc606f req-46c4b68e-60cf-4285-8cd6-c61bca5143f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Updated VIF entry in instance network info cache for port 39057be4-bfdf-4611-a03e-05cf570b079d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.898 189495 DEBUG nova.network.neutron [req-45b3fb72-d141-4d1b-9c91-f2bdd6cc606f req-46c4b68e-60cf-4285-8cd6-c61bca5143f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Updating instance_info_cache with network_info: [{"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:44:57 compute-0 nova_compute[189491]: 2025-12-01 09:44:57.914 189495 DEBUG oslo_concurrency.lockutils [req-45b3fb72-d141-4d1b-9c91-f2bdd6cc606f req-46c4b68e-60cf-4285-8cd6-c61bca5143f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:44:58 compute-0 podman[253683]: 2025-12-01 09:44:58.113430733 +0000 UTC m=+0.088030241 container create b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 09:44:58 compute-0 systemd[1]: Started libpod-conmon-b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486.scope.
Dec  1 09:44:58 compute-0 podman[253683]: 2025-12-01 09:44:58.054293789 +0000 UTC m=+0.028893327 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:44:58 compute-0 nova_compute[189491]: 2025-12-01 09:44:58.153 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:44:58 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab4e8ad1e5cac440176c09810fc6a670a077eca221bd6c9690d7980a522f9ec8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:44:58 compute-0 podman[253683]: 2025-12-01 09:44:58.221781378 +0000 UTC m=+0.196380906 container init b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:44:58 compute-0 podman[253683]: 2025-12-01 09:44:58.229877286 +0000 UTC m=+0.204476794 container start b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:44:58 compute-0 neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e[253697]: [NOTICE]   (253702) : New worker (253704) forked
Dec  1 09:44:58 compute-0 neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e[253697]: [NOTICE]   (253702) : Loading success.
Dec  1 09:44:59 compute-0 nova_compute[189491]: 2025-12-01 09:44:59.704 189495 DEBUG nova.compute.manager [req-1f840fdc-b9da-4b4b-84e2-db5f2b785017 req-09418d94-c92d-4293-a421-5d9be6bfcf03 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received event network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:44:59 compute-0 nova_compute[189491]: 2025-12-01 09:44:59.704 189495 DEBUG oslo_concurrency.lockutils [req-1f840fdc-b9da-4b4b-84e2-db5f2b785017 req-09418d94-c92d-4293-a421-5d9be6bfcf03 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:44:59 compute-0 nova_compute[189491]: 2025-12-01 09:44:59.704 189495 DEBUG oslo_concurrency.lockutils [req-1f840fdc-b9da-4b4b-84e2-db5f2b785017 req-09418d94-c92d-4293-a421-5d9be6bfcf03 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:44:59 compute-0 nova_compute[189491]: 2025-12-01 09:44:59.704 189495 DEBUG oslo_concurrency.lockutils [req-1f840fdc-b9da-4b4b-84e2-db5f2b785017 req-09418d94-c92d-4293-a421-5d9be6bfcf03 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:44:59 compute-0 nova_compute[189491]: 2025-12-01 09:44:59.705 189495 DEBUG nova.compute.manager [req-1f840fdc-b9da-4b4b-84e2-db5f2b785017 req-09418d94-c92d-4293-a421-5d9be6bfcf03 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] No waiting events found dispatching network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:44:59 compute-0 nova_compute[189491]: 2025-12-01 09:44:59.705 189495 WARNING nova.compute.manager [req-1f840fdc-b9da-4b4b-84e2-db5f2b785017 req-09418d94-c92d-4293-a421-5d9be6bfcf03 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received unexpected event network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d for instance with vm_state active and task_state None.#033[00m
Dec  1 09:44:59 compute-0 podman[203700]: time="2025-12-01T09:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:44:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec  1 09:44:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5728 "" "Go-http-client/1.1"
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.272 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.273 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.283 189495 DEBUG nova.compute.manager [req-f3c6639c-7e93-4b94-a528-f86848966346 req-2bb600f8-037a-46bf-92fe-3026e4c06d00 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received event network-changed-39057be4-bfdf-4611-a03e-05cf570b079d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.283 189495 DEBUG nova.compute.manager [req-f3c6639c-7e93-4b94-a528-f86848966346 req-2bb600f8-037a-46bf-92fe-3026e4c06d00 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Refreshing instance network info cache due to event network-changed-39057be4-bfdf-4611-a03e-05cf570b079d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.284 189495 DEBUG oslo_concurrency.lockutils [req-f3c6639c-7e93-4b94-a528-f86848966346 req-2bb600f8-037a-46bf-92fe-3026e4c06d00 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.284 189495 DEBUG oslo_concurrency.lockutils [req-f3c6639c-7e93-4b94-a528-f86848966346 req-2bb600f8-037a-46bf-92fe-3026e4c06d00 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.285 189495 DEBUG nova.network.neutron [req-f3c6639c-7e93-4b94-a528-f86848966346 req-2bb600f8-037a-46bf-92fe-3026e4c06d00 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Refreshing network info cache for port 39057be4-bfdf-4611-a03e-05cf570b079d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.330 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.471 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.472 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.481 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.482 189495 INFO nova.compute.claims [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.679 189495 DEBUG nova.compute.provider_tree [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.699 189495 DEBUG nova.scheduler.client.report [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:45:00 compute-0 podman[253715]: 2025-12-01 09:45:00.719778457 +0000 UTC m=+0.087838486 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.728 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.730 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.777 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.778 189495 DEBUG nova.network.neutron [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.802 189495 INFO nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.824 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.878 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.940 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.943 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.944 189495 INFO nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Creating image(s)#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.945 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "/var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.945 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "/var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.947 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "/var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.947 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:00 compute-0 nova_compute[189491]: 2025-12-01 09:45:00.948 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.050 189495 DEBUG nova.policy [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:45:01 compute-0 openstack_network_exporter[205866]: ERROR   09:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:45:01 compute-0 openstack_network_exporter[205866]: ERROR   09:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:45:01 compute-0 openstack_network_exporter[205866]: ERROR   09:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:45:01 compute-0 openstack_network_exporter[205866]: ERROR   09:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:45:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:45:01 compute-0 openstack_network_exporter[205866]: ERROR   09:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:45:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.606 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.606 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.608 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.608 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.608 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.611 189495 INFO nova.compute.manager [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Terminating instance#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.612 189495 DEBUG nova.compute.manager [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:45:01 compute-0 kernel: tap39057be4-bf (unregistering): left promiscuous mode
Dec  1 09:45:01 compute-0 NetworkManager[56318]: <info>  [1764582301.6535] device (tap39057be4-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:45:01 compute-0 ovn_controller[97794]: 2025-12-01T09:45:01Z|00115|binding|INFO|Releasing lport 39057be4-bfdf-4611-a03e-05cf570b079d from this chassis (sb_readonly=0)
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.668 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:01 compute-0 ovn_controller[97794]: 2025-12-01T09:45:01Z|00116|binding|INFO|Setting lport 39057be4-bfdf-4611-a03e-05cf570b079d down in Southbound
Dec  1 09:45:01 compute-0 ovn_controller[97794]: 2025-12-01T09:45:01Z|00117|binding|INFO|Removing iface tap39057be4-bf ovn-installed in OVS
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.680 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:01.686 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:29:3b 10.100.0.9'], port_security=['fa:16:3e:64:29:3b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '332bb5cd-96b4-43a8-9d53-1d889d5e2df8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00607b38-c4af-4481-a204-66b72a06ac7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '104bf2f5f6f1439e9fc460940d474ff7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3af9e5be-2f19-4cbe-93f7-131a0ec5f44d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c41e6de-1e1f-49da-9091-402137f073fd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=39057be4-bfdf-4611-a03e-05cf570b079d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:45:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:01.688 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 39057be4-bfdf-4611-a03e-05cf570b079d in datapath 00607b38-c4af-4481-a204-66b72a06ac7e unbound from our chassis#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.689 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:01.690 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00607b38-c4af-4481-a204-66b72a06ac7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:45:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:01.692 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2625f6dc-9544-418c-ac52-acec34d46190]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:01.692 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e namespace which is not needed anymore#033[00m
Dec  1 09:45:01 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  1 09:45:01 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 4.537s CPU time.
Dec  1 09:45:01 compute-0 systemd-machined[155812]: Machine qemu-11-instance-0000000a terminated.
Dec  1 09:45:01 compute-0 podman[253745]: 2025-12-01 09:45:01.719394806 +0000 UTC m=+0.092517690 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0)
Dec  1 09:45:01 compute-0 podman[253744]: 2025-12-01 09:45:01.739776764 +0000 UTC m=+0.117053880 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.848 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.856 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:01 compute-0 neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e[253697]: [NOTICE]   (253702) : haproxy version is 2.8.14-c23fe91
Dec  1 09:45:01 compute-0 neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e[253697]: [NOTICE]   (253702) : path to executable is /usr/sbin/haproxy
Dec  1 09:45:01 compute-0 neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e[253697]: [WARNING]  (253702) : Exiting Master process...
Dec  1 09:45:01 compute-0 neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e[253697]: [WARNING]  (253702) : Exiting Master process...
Dec  1 09:45:01 compute-0 neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e[253697]: [ALERT]    (253702) : Current worker (253704) exited with code 143 (Terminated)
Dec  1 09:45:01 compute-0 neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e[253697]: [WARNING]  (253702) : All workers exited. Exiting... (0)
Dec  1 09:45:01 compute-0 systemd[1]: libpod-b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486.scope: Deactivated successfully.
Dec  1 09:45:01 compute-0 podman[253807]: 2025-12-01 09:45:01.872407693 +0000 UTC m=+0.059038393 container died b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.897 189495 INFO nova.virt.libvirt.driver [-] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Instance destroyed successfully.#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.898 189495 DEBUG nova.objects.instance [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lazy-loading 'resources' on Instance uuid 332bb5cd-96b4-43a8-9d53-1d889d5e2df8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486-userdata-shm.mount: Deactivated successfully.
Dec  1 09:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab4e8ad1e5cac440176c09810fc6a670a077eca221bd6c9690d7980a522f9ec8-merged.mount: Deactivated successfully.
Dec  1 09:45:01 compute-0 podman[253807]: 2025-12-01 09:45:01.942002902 +0000 UTC m=+0.128633602 container cleanup b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:45:01 compute-0 systemd[1]: libpod-conmon-b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486.scope: Deactivated successfully.
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.986 189495 DEBUG nova.virt.libvirt.vif [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1882758829',display_name='tempest-ServersTestManualDisk-server-1882758829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1882758829',id=10,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEBun4wV8PqH+paF8mL/kjAoF3csnHUjxB9+OJjPrJ9zvgm5mf5drjzi5QsaL5k8m7FaaWkmzV9DwtcJrOsdFYWS8HcOG+BcZQThXRdW9XzhSoxmfPyEiSufuVm2QUPnEQ==',key_name='tempest-keypair-1598602579',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:44:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='104bf2f5f6f1439e9fc460940d474ff7',ramdisk_id='',reservation_id='r-okhxk7ta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-192501260',owner_user_name='tempest-ServersTestManualDisk-192501260-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:44:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22ae22ecd4ce4774b704b3aa723962b8',uuid=332bb5cd-96b4-43a8-9d53-1d889d5e2df8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.989 189495 DEBUG nova.network.os_vif_util [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Converting VIF {"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.990 189495 DEBUG nova.network.os_vif_util [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:29:3b,bridge_name='br-int',has_traffic_filtering=True,id=39057be4-bfdf-4611-a03e-05cf570b079d,network=Network(00607b38-c4af-4481-a204-66b72a06ac7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39057be4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.991 189495 DEBUG os_vif [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:29:3b,bridge_name='br-int',has_traffic_filtering=True,id=39057be4-bfdf-4611-a03e-05cf570b079d,network=Network(00607b38-c4af-4481-a204-66b72a06ac7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39057be4-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.993 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.993 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39057be4-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:01 compute-0 nova_compute[189491]: 2025-12-01 09:45:01.998 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.001 189495 INFO os_vif [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:29:3b,bridge_name='br-int',has_traffic_filtering=True,id=39057be4-bfdf-4611-a03e-05cf570b079d,network=Network(00607b38-c4af-4481-a204-66b72a06ac7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39057be4-bf')#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.002 189495 INFO nova.virt.libvirt.driver [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Deleting instance files /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8_del#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.003 189495 INFO nova.virt.libvirt.driver [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Deletion of /var/lib/nova/instances/332bb5cd-96b4-43a8-9d53-1d889d5e2df8_del complete#033[00m
Dec  1 09:45:02 compute-0 podman[253851]: 2025-12-01 09:45:02.022413906 +0000 UTC m=+0.052966965 container remove b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.030 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2bff7f84-99a3-420f-aac2-a90a6c6e3149]: (4, ('Mon Dec  1 09:45:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e (b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486)\nb2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486\nMon Dec  1 09:45:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e (b2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486)\nb2a1c925f1e5819afb9501291822beae64e0937ce7bfa9888061dab1aa26f486\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.033 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd926a4-4cdd-4541-a6ef-29b8b9639134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.034 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00607b38-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:02 compute-0 kernel: tap00607b38-c0: left promiscuous mode
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.044 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.048 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d7af4ffc-b979-4793-9603-d4234eb9e0d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.059 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.073 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b83486ba-fadb-43c0-a4bb-39233083df0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.075 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2cca37-e49f-4c3d-aa86-862383ab1301]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.091 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6b52e343-d52a-4640-b097-aa84ebb87ba4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553693, 'reachable_time': 43499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253866, 'error': None, 'target': 'ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d00607b38\x2dc4af\x2d4481\x2da204\x2d66b72a06ac7e.mount: Deactivated successfully.
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.098 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00607b38-c4af-4481-a204-66b72a06ac7e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:45:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:02.098 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[aa26894d-2a21-4c1b-96e6-2af429cc9f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.112 189495 INFO nova.compute.manager [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Took 0.50 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.113 189495 DEBUG oslo.service.loopingcall [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.113 189495 DEBUG nova.compute.manager [-] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.113 189495 DEBUG nova.network.neutron [-] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.318 189495 DEBUG nova.network.neutron [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Successfully created port: e1536dee-e9fa-499f-9e7a-2b2a0ecce586 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:45:02 compute-0 ovn_controller[97794]: 2025-12-01T09:45:02Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:a3:58 10.100.0.10
Dec  1 09:45:02 compute-0 ovn_controller[97794]: 2025-12-01T09:45:02Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:a3:58 10.100.0.10
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.646 189495 DEBUG nova.compute.manager [req-8d191675-d565-46eb-91c5-c2b9ff8b4f00 req-158e4f64-fce3-42d7-9f73-674c2ea09eb6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received event network-vif-unplugged-39057be4-bfdf-4611-a03e-05cf570b079d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.647 189495 DEBUG oslo_concurrency.lockutils [req-8d191675-d565-46eb-91c5-c2b9ff8b4f00 req-158e4f64-fce3-42d7-9f73-674c2ea09eb6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.647 189495 DEBUG oslo_concurrency.lockutils [req-8d191675-d565-46eb-91c5-c2b9ff8b4f00 req-158e4f64-fce3-42d7-9f73-674c2ea09eb6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.647 189495 DEBUG oslo_concurrency.lockutils [req-8d191675-d565-46eb-91c5-c2b9ff8b4f00 req-158e4f64-fce3-42d7-9f73-674c2ea09eb6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.647 189495 DEBUG nova.compute.manager [req-8d191675-d565-46eb-91c5-c2b9ff8b4f00 req-158e4f64-fce3-42d7-9f73-674c2ea09eb6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] No waiting events found dispatching network-vif-unplugged-39057be4-bfdf-4611-a03e-05cf570b079d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.648 189495 DEBUG nova.compute.manager [req-8d191675-d565-46eb-91c5-c2b9ff8b4f00 req-158e4f64-fce3-42d7-9f73-674c2ea09eb6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received event network-vif-unplugged-39057be4-bfdf-4611-a03e-05cf570b079d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.860 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.925 189495 DEBUG nova.network.neutron [req-f3c6639c-7e93-4b94-a528-f86848966346 req-2bb600f8-037a-46bf-92fe-3026e4c06d00 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Updated VIF entry in instance network info cache for port 39057be4-bfdf-4611-a03e-05cf570b079d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.926 189495 DEBUG nova.network.neutron [req-f3c6639c-7e93-4b94-a528-f86848966346 req-2bb600f8-037a-46bf-92fe-3026e4c06d00 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Updating instance_info_cache with network_info: [{"id": "39057be4-bfdf-4611-a03e-05cf570b079d", "address": "fa:16:3e:64:29:3b", "network": {"id": "00607b38-c4af-4481-a204-66b72a06ac7e", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1266600030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "104bf2f5f6f1439e9fc460940d474ff7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39057be4-bf", "ovs_interfaceid": "39057be4-bfdf-4611-a03e-05cf570b079d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.939 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75.part --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.940 189495 DEBUG nova.virt.images [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] 280f4e4d-4a12-4164-a687-6106a9afc7fe was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.941 189495 DEBUG nova.privsep.utils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.941 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75.part /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:02 compute-0 nova_compute[189491]: 2025-12-01 09:45:02.966 189495 DEBUG oslo_concurrency.lockutils [req-f3c6639c-7e93-4b94-a528-f86848966346 req-2bb600f8-037a-46bf-92fe-3026e4c06d00 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-332bb5cd-96b4-43a8-9d53-1d889d5e2df8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.161 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.242 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75.part /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75.converted" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.248 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.307 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.309 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.333 189495 DEBUG nova.network.neutron [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Successfully updated port: e1536dee-e9fa-499f-9e7a-2b2a0ecce586 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.335 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.357 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.357 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.358 189495 DEBUG nova.network.neutron [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.395 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.396 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.396 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.411 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.476 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.477 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75,backing_fmt=raw /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.523 189495 DEBUG nova.network.neutron [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.527 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75,backing_fmt=raw /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.529 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.529 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.598 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.599 189495 DEBUG nova.virt.disk.api [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Checking if we can resize image /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.599 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.688 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.689 189495 DEBUG nova.virt.disk.api [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Cannot resize image /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.689 189495 DEBUG nova.objects.instance [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lazy-loading 'migration_context' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.712 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.712 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Ensure instance console log exists: /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.713 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.713 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.713 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.860 189495 DEBUG nova.network.neutron [-] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.896 189495 INFO nova.compute.manager [-] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Took 1.78 seconds to deallocate network for instance.#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.942 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.943 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:03 compute-0 nova_compute[189491]: 2025-12-01 09:45:03.945 189495 DEBUG nova.compute.manager [req-70e1f68f-205b-4e68-91a8-9834f28f2595 req-c6e407c7-bbf5-4bb9-bb6a-36144553caf3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received event network-vif-deleted-39057be4-bfdf-4611-a03e-05cf570b079d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.106 189495 DEBUG nova.compute.provider_tree [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.129 189495 DEBUG nova.scheduler.client.report [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.152 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.184 189495 INFO nova.scheduler.client.report [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Deleted allocations for instance 332bb5cd-96b4-43a8-9d53-1d889d5e2df8#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.284 189495 DEBUG oslo_concurrency.lockutils [None req-66e744e8-31c4-4cb9-a18e-90560c02e508 22ae22ecd4ce4774b704b3aa723962b8 104bf2f5f6f1439e9fc460940d474ff7 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.623 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.746 189495 DEBUG nova.network.neutron [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.772 189495 DEBUG nova.compute.manager [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received event network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.772 189495 DEBUG oslo_concurrency.lockutils [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.772 189495 DEBUG oslo_concurrency.lockutils [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.773 189495 DEBUG oslo_concurrency.lockutils [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "332bb5cd-96b4-43a8-9d53-1d889d5e2df8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.773 189495 DEBUG nova.compute.manager [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] No waiting events found dispatching network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.773 189495 WARNING nova.compute.manager [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Received unexpected event network-vif-plugged-39057be4-bfdf-4611-a03e-05cf570b079d for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.773 189495 DEBUG nova.compute.manager [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received event network-changed-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.773 189495 DEBUG nova.compute.manager [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Refreshing instance network info cache due to event network-changed-e1536dee-e9fa-499f-9e7a-2b2a0ecce586. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.774 189495 DEBUG oslo_concurrency.lockutils [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.776 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.776 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Instance network_info: |[{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.777 189495 DEBUG oslo_concurrency.lockutils [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.777 189495 DEBUG nova.network.neutron [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Refreshing network info cache for port e1536dee-e9fa-499f-9e7a-2b2a0ecce586 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.779 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Start _get_guest_xml network_info=[{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:44:52Z,direct_url=<?>,disk_format='qcow2',id=280f4e4d-4a12-4164-a687-6106a9afc7fe,min_disk=0,min_ram=0,name='tempest-scenario-img--1642109444',owner='6d5294cc5ac64b22a4a0f770b8d8bc61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:44:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.785 189495 WARNING nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.797 189495 DEBUG nova.virt.libvirt.host [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.797 189495 DEBUG nova.virt.libvirt.host [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.803 189495 DEBUG nova.virt.libvirt.host [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.804 189495 DEBUG nova.virt.libvirt.host [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.804 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.804 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:44:52Z,direct_url=<?>,disk_format='qcow2',id=280f4e4d-4a12-4164-a687-6106a9afc7fe,min_disk=0,min_ram=0,name='tempest-scenario-img--1642109444',owner='6d5294cc5ac64b22a4a0f770b8d8bc61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:44:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.805 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.805 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.805 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.805 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.805 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.805 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.806 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.806 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.806 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.806 189495 DEBUG nova.virt.hardware [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.809 189495 DEBUG nova.virt.libvirt.vif [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:44:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2',id=11,image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e03937ad-4d2d-4edc-9b33-ed8d878566ca'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d5294cc5ac64b22a4a0f770b8d8bc61',ramdisk_id='',reservation_id='r-flgn0x2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1348038279',owner_user_name='tempest-PrometheusGabbiTest-1348038279-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:45:00Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c54f3a4a232b4a739be88e97f2094d4f',uuid=dc0d510c-4baf-4bcb-ab4f-de6ee48849c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.809 189495 DEBUG nova.network.os_vif_util [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converting VIF {"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.810 189495 DEBUG nova.network.os_vif_util [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:a8:e2,bridge_name='br-int',has_traffic_filtering=True,id=e1536dee-e9fa-499f-9e7a-2b2a0ecce586,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1536dee-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.810 189495 DEBUG nova.objects.instance [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.846 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <uuid>dc0d510c-4baf-4bcb-ab4f-de6ee48849c0</uuid>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <name>instance-0000000b</name>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <nova:name>te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2</nova:name>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:45:04</nova:creationTime>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:45:04 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:45:04 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:45:04 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:45:04 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:45:04 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:45:04 compute-0 nova_compute[189491]:        <nova:user uuid="c54f3a4a232b4a739be88e97f2094d4f">tempest-PrometheusGabbiTest-1348038279-project-member</nova:user>
Dec  1 09:45:04 compute-0 nova_compute[189491]:        <nova:project uuid="6d5294cc5ac64b22a4a0f770b8d8bc61">tempest-PrometheusGabbiTest-1348038279</nova:project>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="280f4e4d-4a12-4164-a687-6106a9afc7fe"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:45:04 compute-0 nova_compute[189491]:        <nova:port uuid="e1536dee-e9fa-499f-9e7a-2b2a0ecce586">
Dec  1 09:45:04 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.156" ipVersion="4"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <system>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <entry name="serial">dc0d510c-4baf-4bcb-ab4f-de6ee48849c0</entry>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <entry name="uuid">dc0d510c-4baf-4bcb-ab4f-de6ee48849c0</entry>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </system>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <os>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  </os>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <features>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  </features>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.config"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:50:a8:e2"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <target dev="tape1536dee-e9"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/console.log" append="off"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <video>
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </video>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:45:04 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:45:04 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:45:04 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:45:04 compute-0 nova_compute[189491]: </domain>
Dec  1 09:45:04 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.846 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Preparing to wait for external event network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.846 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.846 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.847 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.847 189495 DEBUG nova.virt.libvirt.vif [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:44:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2',id=11,image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e03937ad-4d2d-4edc-9b33-ed8d878566ca'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d5294cc5ac64b22a4a0f770b8d8bc61',ramdisk_id='',reservation_id='r-flgn0x2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1348038279',owner_user_name='tempest-PrometheusGabbiTest-1348038279-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:45:00Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c54f3a4a232b4a739be88e97f2094d4f',uuid=dc0d510c-4baf-4bcb-ab4f-de6ee48849c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.847 189495 DEBUG nova.network.os_vif_util [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converting VIF {"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.848 189495 DEBUG nova.network.os_vif_util [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:a8:e2,bridge_name='br-int',has_traffic_filtering=True,id=e1536dee-e9fa-499f-9e7a-2b2a0ecce586,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1536dee-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.848 189495 DEBUG os_vif [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:a8:e2,bridge_name='br-int',has_traffic_filtering=True,id=e1536dee-e9fa-499f-9e7a-2b2a0ecce586,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1536dee-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.848 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.849 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.849 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.852 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.853 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1536dee-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.853 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1536dee-e9, col_values=(('external_ids', {'iface-id': 'e1536dee-e9fa-499f-9e7a-2b2a0ecce586', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:a8:e2', 'vm-uuid': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:04 compute-0 NetworkManager[56318]: <info>  [1764582304.8563] manager: (tape1536dee-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.856 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.866 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.868 189495 INFO os_vif [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:a8:e2,bridge_name='br-int',has_traffic_filtering=True,id=e1536dee-e9fa-499f-9e7a-2b2a0ecce586,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1536dee-e9')#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.957 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.958 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.958 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] No VIF found with MAC fa:16:3e:50:a8:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:45:04 compute-0 nova_compute[189491]: 2025-12-01 09:45:04.959 189495 INFO nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Using config drive#033[00m
Dec  1 09:45:05 compute-0 nova_compute[189491]: 2025-12-01 09:45:05.352 189495 INFO nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Creating config drive at /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.config#033[00m
Dec  1 09:45:05 compute-0 nova_compute[189491]: 2025-12-01 09:45:05.358 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqze74pw9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:05 compute-0 nova_compute[189491]: 2025-12-01 09:45:05.489 189495 DEBUG oslo_concurrency.processutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqze74pw9" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:05 compute-0 kernel: tape1536dee-e9: entered promiscuous mode
Dec  1 09:45:05 compute-0 NetworkManager[56318]: <info>  [1764582305.5869] manager: (tape1536dee-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Dec  1 09:45:05 compute-0 nova_compute[189491]: 2025-12-01 09:45:05.595 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:05 compute-0 ovn_controller[97794]: 2025-12-01T09:45:05Z|00118|binding|INFO|Claiming lport e1536dee-e9fa-499f-9e7a-2b2a0ecce586 for this chassis.
Dec  1 09:45:05 compute-0 ovn_controller[97794]: 2025-12-01T09:45:05Z|00119|binding|INFO|e1536dee-e9fa-499f-9e7a-2b2a0ecce586: Claiming fa:16:3e:50:a8:e2 10.100.0.156
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.619 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:a8:e2 10.100.0.156'], port_security=['fa:16:3e:50:a8:e2 10.100.0.156'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.156/16', 'neutron:device_id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43f98091-3f01-4ffd-9cb2-02d78ab9f60c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c2dbc4a-f4e0-49c5-bb92-4872f344781e, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=e1536dee-e9fa-499f-9e7a-2b2a0ecce586) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.621 106659 INFO neutron.agent.ovn.metadata.agent [-] Port e1536dee-e9fa-499f-9e7a-2b2a0ecce586 in datapath cf0577af-a5ed-496f-aa24-ae4d86898e85 bound to our chassis#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.623 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0577af-a5ed-496f-aa24-ae4d86898e85#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.638 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa786c5-5e33-446a-aa42-7db981a847ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.639 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf0577af-a1 in ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:45:05 compute-0 systemd-udevd[253917]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.646 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf0577af-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.646 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[9736416f-6f10-4ef3-b47a-d18d86a4263f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.648 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c34d92-a483-4117-9091-7ff283da9eee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 NetworkManager[56318]: <info>  [1764582305.6662] device (tape1536dee-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:45:05 compute-0 NetworkManager[56318]: <info>  [1764582305.6671] device (tape1536dee-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:45:05 compute-0 systemd-machined[155812]: New machine qemu-12-instance-0000000b.
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.660 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[20d20a2f-5b6e-4050-ab80-fd268d81b76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 nova_compute[189491]: 2025-12-01 09:45:05.679 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:05 compute-0 ovn_controller[97794]: 2025-12-01T09:45:05Z|00120|binding|INFO|Setting lport e1536dee-e9fa-499f-9e7a-2b2a0ecce586 ovn-installed in OVS
Dec  1 09:45:05 compute-0 ovn_controller[97794]: 2025-12-01T09:45:05Z|00121|binding|INFO|Setting lport e1536dee-e9fa-499f-9e7a-2b2a0ecce586 up in Southbound
Dec  1 09:45:05 compute-0 nova_compute[189491]: 2025-12-01 09:45:05.682 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:05 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.700 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfdae6e-e5e4-4402-a03c-a5c423789cf7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.738 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[5df3af6f-0a8c-4a4b-986c-f56b21ac7050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.749 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe7a2fa-b588-425e-b80a-65e02cced8c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 NetworkManager[56318]: <info>  [1764582305.7514] manager: (tapcf0577af-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Dec  1 09:45:05 compute-0 systemd-udevd[253921]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.798 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[9797504a-29b2-461e-8229-936d8d1d9b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.802 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[79805378-3e7c-45d2-86e8-d3e11636b1cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 NetworkManager[56318]: <info>  [1764582305.8280] device (tapcf0577af-a0): carrier: link connected
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.836 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[f24040bd-041f-4d60-9eea-62ab8f56929b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.854 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[67f2f13d-9a80-4e36-a670-158af32be8c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0577af-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:ac:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554552, 'reachable_time': 44152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253952, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.880 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[622c83a7-a1ed-4394-a875-d4c8fc58c624]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:ac52'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554552, 'tstamp': 554552}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253953, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.903 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e32a1035-6661-4885-8cc3-2e92f53fe4c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0577af-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:ac:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554552, 'reachable_time': 44152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253954, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:05.942 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[26198bd5-f15b-4558-be2d-f63751532206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.028 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[373817b9-f9b9-4cf3-8544-5499a95ef4f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.030 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0577af-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.030 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.030 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0577af-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:06 compute-0 kernel: tapcf0577af-a0: entered promiscuous mode
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.034 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:06 compute-0 NetworkManager[56318]: <info>  [1764582306.0386] manager: (tapcf0577af-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.042 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0577af-a0, col_values=(('external_ids', {'iface-id': '7159c06b-520e-4157-9235-0b4ddbac66cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.044 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:06 compute-0 ovn_controller[97794]: 2025-12-01T09:45:06Z|00122|binding|INFO|Releasing lport 7159c06b-520e-4157-9235-0b4ddbac66cf from this chassis (sb_readonly=0)
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.045 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.046 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf0577af-a5ed-496f-aa24-ae4d86898e85.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf0577af-a5ed-496f-aa24-ae4d86898e85.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.047 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[75271d8c-eab2-447c-87f6-b115966dc348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.048 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-cf0577af-a5ed-496f-aa24-ae4d86898e85
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/cf0577af-a5ed-496f-aa24-ae4d86898e85.pid.haproxy
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID cf0577af-a5ed-496f-aa24-ae4d86898e85
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:45:06 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:06.048 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'env', 'PROCESS_TAG=haproxy-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf0577af-a5ed-496f-aa24-ae4d86898e85.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.057 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.283 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582306.2828984, dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.283 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] VM Started (Lifecycle Event)#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.389 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.397 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582306.2831001, dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.397 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.418 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.424 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.492 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:45:06 compute-0 podman[253995]: 2025-12-01 09:45:06.536226098 +0000 UTC m=+0.079644016 container create 11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:45:06 compute-0 systemd[1]: Started libpod-conmon-11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1.scope.
Dec  1 09:45:06 compute-0 podman[253995]: 2025-12-01 09:45:06.498399215 +0000 UTC m=+0.041817153 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.599 189495 DEBUG nova.network.neutron [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated VIF entry in instance network info cache for port e1536dee-e9fa-499f-9e7a-2b2a0ecce586. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.600 189495 DEBUG nova.network.neutron [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:06 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3150d317095ad740158ae4fa495bd79bb1451eaef937eaa43dfcd85db07375/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:45:06 compute-0 nova_compute[189491]: 2025-12-01 09:45:06.618 189495 DEBUG oslo_concurrency.lockutils [req-b5100c73-71c9-4315-b93c-6642fd5cafa6 req-a1ed9997-4718-4ee1-8561-daccadf10473 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:06 compute-0 podman[253995]: 2025-12-01 09:45:06.63542856 +0000 UTC m=+0.178846498 container init 11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 09:45:06 compute-0 podman[253995]: 2025-12-01 09:45:06.643058697 +0000 UTC m=+0.186476605 container start 11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec  1 09:45:06 compute-0 neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85[254007]: [NOTICE]   (254011) : New worker (254013) forked
Dec  1 09:45:06 compute-0 neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85[254007]: [NOTICE]   (254011) : Loading success.
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.100 189495 DEBUG nova.compute.manager [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received event network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.101 189495 DEBUG oslo_concurrency.lockutils [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.101 189495 DEBUG oslo_concurrency.lockutils [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.102 189495 DEBUG oslo_concurrency.lockutils [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.102 189495 DEBUG nova.compute.manager [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Processing event network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.102 189495 DEBUG nova.compute.manager [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received event network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.102 189495 DEBUG oslo_concurrency.lockutils [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.102 189495 DEBUG oslo_concurrency.lockutils [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.102 189495 DEBUG oslo_concurrency.lockutils [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.103 189495 DEBUG nova.compute.manager [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] No waiting events found dispatching network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.103 189495 WARNING nova.compute.manager [req-f46119b3-19dd-45e7-af9a-8655584d355f req-f6456966-67cf-44fb-a573-041c6d6997de ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received unexpected event network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 for instance with vm_state building and task_state spawning.#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.104 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.112 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.113 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582307.1111405, dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.113 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.122 189495 INFO nova.virt.libvirt.driver [-] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Instance spawned successfully.#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.123 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.139 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.149 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.158 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.159 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.159 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.159 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.160 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.160 189495 DEBUG nova.virt.libvirt.driver [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.185 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.305 189495 INFO nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Took 6.36 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.306 189495 DEBUG nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.617 189495 INFO nova.compute.manager [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Took 7.19 seconds to build instance.#033[00m
Dec  1 09:45:07 compute-0 nova_compute[189491]: 2025-12-01 09:45:07.643 189495 DEBUG oslo_concurrency.lockutils [None req-2f487f36-1618-4066-8c19-ad1b32e4c962 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:07 compute-0 ovn_controller[97794]: 2025-12-01T09:45:07Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:32:12 10.100.0.14
Dec  1 09:45:08 compute-0 nova_compute[189491]: 2025-12-01 09:45:08.160 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:09 compute-0 nova_compute[189491]: 2025-12-01 09:45:09.353 189495 INFO nova.compute.manager [None req-c5222b67-079c-415c-a314-adf1ad5d8514 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Get console output#033[00m
Dec  1 09:45:09 compute-0 nova_compute[189491]: 2025-12-01 09:45:09.448 239700 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  1 09:45:09 compute-0 podman[254023]: 2025-12-01 09:45:09.708202294 +0000 UTC m=+0.074724255 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  1 09:45:09 compute-0 podman[254022]: 2025-12-01 09:45:09.75430865 +0000 UTC m=+0.114743433 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41)
Dec  1 09:45:09 compute-0 nova_compute[189491]: 2025-12-01 09:45:09.856 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:11 compute-0 nova_compute[189491]: 2025-12-01 09:45:11.448 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:11 compute-0 nova_compute[189491]: 2025-12-01 09:45:11.686 189495 DEBUG nova.compute.manager [req-9ebc9902-856b-4863-85a6-1585fec19c3a req-35603ac2-2921-4727-b298-4e4e7f62e20a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-changed-9ba63f14-2eaa-45bf-8c16-59bd3a7893de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:11 compute-0 nova_compute[189491]: 2025-12-01 09:45:11.687 189495 DEBUG nova.compute.manager [req-9ebc9902-856b-4863-85a6-1585fec19c3a req-35603ac2-2921-4727-b298-4e4e7f62e20a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Refreshing instance network info cache due to event network-changed-9ba63f14-2eaa-45bf-8c16-59bd3a7893de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:45:11 compute-0 nova_compute[189491]: 2025-12-01 09:45:11.687 189495 DEBUG oslo_concurrency.lockutils [req-9ebc9902-856b-4863-85a6-1585fec19c3a req-35603ac2-2921-4727-b298-4e4e7f62e20a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:11 compute-0 nova_compute[189491]: 2025-12-01 09:45:11.687 189495 DEBUG oslo_concurrency.lockutils [req-9ebc9902-856b-4863-85a6-1585fec19c3a req-35603ac2-2921-4727-b298-4e4e7f62e20a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:11 compute-0 nova_compute[189491]: 2025-12-01 09:45:11.688 189495 DEBUG nova.network.neutron [req-9ebc9902-856b-4863-85a6-1585fec19c3a req-35603ac2-2921-4727-b298-4e4e7f62e20a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Refreshing network info cache for port 9ba63f14-2eaa-45bf-8c16-59bd3a7893de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:45:12 compute-0 ovn_controller[97794]: 2025-12-01T09:45:12Z|00123|binding|INFO|Releasing lport a52d5841-c07f-4d57-abbb-5b84c6008243 from this chassis (sb_readonly=0)
Dec  1 09:45:12 compute-0 ovn_controller[97794]: 2025-12-01T09:45:12Z|00124|binding|INFO|Releasing lport 8e3cbcf0-fa9b-4b7e-8d20-6f493c3e3d90 from this chassis (sb_readonly=0)
Dec  1 09:45:12 compute-0 ovn_controller[97794]: 2025-12-01T09:45:12Z|00125|binding|INFO|Releasing lport 7159c06b-520e-4157-9235-0b4ddbac66cf from this chassis (sb_readonly=0)
Dec  1 09:45:12 compute-0 nova_compute[189491]: 2025-12-01 09:45:12.675 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:13 compute-0 nova_compute[189491]: 2025-12-01 09:45:13.164 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:13 compute-0 podman[254059]: 2025-12-01 09:45:13.731095738 +0000 UTC m=+0.104424900 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec  1 09:45:13 compute-0 podman[254060]: 2025-12-01 09:45:13.806536891 +0000 UTC m=+0.163090533 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 09:45:14 compute-0 nova_compute[189491]: 2025-12-01 09:45:14.222 189495 DEBUG nova.network.neutron [req-9ebc9902-856b-4863-85a6-1585fec19c3a req-35603ac2-2921-4727-b298-4e4e7f62e20a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Updated VIF entry in instance network info cache for port 9ba63f14-2eaa-45bf-8c16-59bd3a7893de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:45:14 compute-0 nova_compute[189491]: 2025-12-01 09:45:14.223 189495 DEBUG nova.network.neutron [req-9ebc9902-856b-4863-85a6-1585fec19c3a req-35603ac2-2921-4727-b298-4e4e7f62e20a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Updating instance_info_cache with network_info: [{"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:14 compute-0 nova_compute[189491]: 2025-12-01 09:45:14.245 189495 DEBUG oslo_concurrency.lockutils [req-9ebc9902-856b-4863-85a6-1585fec19c3a req-35603ac2-2921-4727-b298-4e4e7f62e20a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-70f48496-14bd-4e6f-8706-262d8e6b9510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:14 compute-0 nova_compute[189491]: 2025-12-01 09:45:14.859 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:17 compute-0 nova_compute[189491]: 2025-12-01 09:45:17.130 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764582301.894458, 332bb5cd-96b4-43a8-9d53-1d889d5e2df8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:17 compute-0 nova_compute[189491]: 2025-12-01 09:45:17.130 189495 INFO nova.compute.manager [-] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:45:17 compute-0 nova_compute[189491]: 2025-12-01 09:45:17.191 189495 DEBUG nova.compute.manager [None req-c78aff8f-708e-495d-a280-b72158e52b3c - - - - - -] [instance: 332bb5cd-96b4-43a8-9d53-1d889d5e2df8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:18 compute-0 nova_compute[189491]: 2025-12-01 09:45:18.166 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:19 compute-0 nova_compute[189491]: 2025-12-01 09:45:19.863 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:20 compute-0 nova_compute[189491]: 2025-12-01 09:45:20.842 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.171 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.239 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "7535b6dd-3ef8-4847-812d-f0a9208df287" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.240 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.261 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.355 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.356 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.367 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.368 189495 INFO nova.compute.claims [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.533 189495 DEBUG nova.compute.provider_tree [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.548 189495 DEBUG nova.scheduler.client.report [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.572 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.573 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.629 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.631 189495 DEBUG nova.network.neutron [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.652 189495 INFO nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.692 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:45:23 compute-0 podman[254107]: 2025-12-01 09:45:23.714090452 +0000 UTC m=+0.077686537 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:45:23 compute-0 podman[254108]: 2025-12-01 09:45:23.739642997 +0000 UTC m=+0.097111033 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.796 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.798 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.799 189495 INFO nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Creating image(s)#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.799 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "/var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.800 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "/var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.801 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "/var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.815 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.871 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.872 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.873 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.885 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.946 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.947 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.988 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.990 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:23 compute-0 nova_compute[189491]: 2025-12-01 09:45:23.991 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.013 189495 DEBUG nova.policy [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3f19699d7cb4493292a31daef496a1c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ee60ff0d117e468aa42c7d39022568ea', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.059 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.060 189495 DEBUG nova.virt.disk.api [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Checking if we can resize image /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.061 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.133 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.135 189495 DEBUG nova.virt.disk.api [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Cannot resize image /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.135 189495 DEBUG nova.objects.instance [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lazy-loading 'migration_context' on Instance uuid 7535b6dd-3ef8-4847-812d-f0a9208df287 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.151 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.151 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Ensure instance console log exists: /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.152 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.153 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.153 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:24 compute-0 nova_compute[189491]: 2025-12-01 09:45:24.865 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:25 compute-0 nova_compute[189491]: 2025-12-01 09:45:25.204 189495 DEBUG nova.network.neutron [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Successfully created port: 5f6c9141-b437-4ca0-bceb-99a3d14bb457 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:45:25 compute-0 nova_compute[189491]: 2025-12-01 09:45:25.441 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:26 compute-0 nova_compute[189491]: 2025-12-01 09:45:26.496 189495 DEBUG nova.network.neutron [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Successfully updated port: 5f6c9141-b437-4ca0-bceb-99a3d14bb457 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:45:26 compute-0 nova_compute[189491]: 2025-12-01 09:45:26.512 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:26 compute-0 nova_compute[189491]: 2025-12-01 09:45:26.513 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquired lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:26 compute-0 nova_compute[189491]: 2025-12-01 09:45:26.514 189495 DEBUG nova.network.neutron [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:45:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:26.536 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:26.537 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:26.538 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:26 compute-0 nova_compute[189491]: 2025-12-01 09:45:26.592 189495 DEBUG nova.compute.manager [req-0fa1d8e1-bb6c-422f-84ea-23aabee47d48 req-c5391027-7c91-414c-af25-431f3ac20352 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received event network-changed-5f6c9141-b437-4ca0-bceb-99a3d14bb457 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:26 compute-0 nova_compute[189491]: 2025-12-01 09:45:26.593 189495 DEBUG nova.compute.manager [req-0fa1d8e1-bb6c-422f-84ea-23aabee47d48 req-c5391027-7c91-414c-af25-431f3ac20352 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Refreshing instance network info cache due to event network-changed-5f6c9141-b437-4ca0-bceb-99a3d14bb457. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:45:26 compute-0 nova_compute[189491]: 2025-12-01 09:45:26.594 189495 DEBUG oslo_concurrency.lockutils [req-0fa1d8e1-bb6c-422f-84ea-23aabee47d48 req-c5391027-7c91-414c-af25-431f3ac20352 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:26 compute-0 nova_compute[189491]: 2025-12-01 09:45:26.747 189495 DEBUG nova.network.neutron [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.923 189495 DEBUG nova.network.neutron [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Updating instance_info_cache with network_info: [{"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.946 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Releasing lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.947 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Instance network_info: |[{"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.948 189495 DEBUG oslo_concurrency.lockutils [req-0fa1d8e1-bb6c-422f-84ea-23aabee47d48 req-c5391027-7c91-414c-af25-431f3ac20352 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.948 189495 DEBUG nova.network.neutron [req-0fa1d8e1-bb6c-422f-84ea-23aabee47d48 req-c5391027-7c91-414c-af25-431f3ac20352 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Refreshing network info cache for port 5f6c9141-b437-4ca0-bceb-99a3d14bb457 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.951 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Start _get_guest_xml network_info=[{"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.959 189495 WARNING nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.969 189495 DEBUG nova.virt.libvirt.host [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.970 189495 DEBUG nova.virt.libvirt.host [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.975 189495 DEBUG nova.virt.libvirt.host [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.977 189495 DEBUG nova.virt.libvirt.host [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.978 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.978 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.979 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.979 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.980 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.980 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.981 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.981 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.982 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.982 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.983 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.983 189495 DEBUG nova.virt.hardware [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.988 189495 DEBUG nova.virt.libvirt.vif [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1346121752',display_name='tempest-TestNetworkBasicOps-server-1346121752',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1346121752',id=12,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzY2QooqBKCgrVmFm9t9G6kUoxRR7Z58hf2jxLG81LTp7tA7B5s3qGHwrOLAvUIw9FkUrXmSb+JOXMns7AV8is1dyQKTdDiNnfExt9nI0JCJ7U4FIFbUzsyCbyBdqeGug==',key_name='tempest-TestNetworkBasicOps-871464086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee60ff0d117e468aa42c7d39022568ea',ramdisk_id='',reservation_id='r-mcvw58o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-291434657',owner_user_name='tempest-TestNetworkBasicOps-291434657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:45:23Z,user_data=None,user_id='3f19699d7cb4493292a31daef496a1c2',uuid=7535b6dd-3ef8-4847-812d-f0a9208df287,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.989 189495 DEBUG nova.network.os_vif_util [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converting VIF {"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.990 189495 DEBUG nova.network.os_vif_util [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:34:1f,bridge_name='br-int',has_traffic_filtering=True,id=5f6c9141-b437-4ca0-bceb-99a3d14bb457,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f6c9141-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:45:27 compute-0 nova_compute[189491]: 2025-12-01 09:45:27.991 189495 DEBUG nova.objects.instance [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lazy-loading 'pci_devices' on Instance uuid 7535b6dd-3ef8-4847-812d-f0a9208df287 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.010 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <uuid>7535b6dd-3ef8-4847-812d-f0a9208df287</uuid>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <name>instance-0000000c</name>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <nova:name>tempest-TestNetworkBasicOps-server-1346121752</nova:name>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:45:27</nova:creationTime>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:45:28 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:45:28 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:45:28 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:45:28 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:45:28 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:45:28 compute-0 nova_compute[189491]:        <nova:user uuid="3f19699d7cb4493292a31daef496a1c2">tempest-TestNetworkBasicOps-291434657-project-member</nova:user>
Dec  1 09:45:28 compute-0 nova_compute[189491]:        <nova:project uuid="ee60ff0d117e468aa42c7d39022568ea">tempest-TestNetworkBasicOps-291434657</nova:project>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:45:28 compute-0 nova_compute[189491]:        <nova:port uuid="5f6c9141-b437-4ca0-bceb-99a3d14bb457">
Dec  1 09:45:28 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <system>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <entry name="serial">7535b6dd-3ef8-4847-812d-f0a9208df287</entry>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <entry name="uuid">7535b6dd-3ef8-4847-812d-f0a9208df287</entry>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </system>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <os>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  </os>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <features>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  </features>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk.config"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:8c:34:1f"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <target dev="tap5f6c9141-b4"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/console.log" append="off"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <video>
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </video>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:45:28 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:45:28 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:45:28 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:45:28 compute-0 nova_compute[189491]: </domain>
Dec  1 09:45:28 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.011 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Preparing to wait for external event network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.011 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.012 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.012 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.014 189495 DEBUG nova.virt.libvirt.vif [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1346121752',display_name='tempest-TestNetworkBasicOps-server-1346121752',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1346121752',id=12,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzY2QooqBKCgrVmFm9t9G6kUoxRR7Z58hf2jxLG81LTp7tA7B5s3qGHwrOLAvUIw9FkUrXmSb+JOXMns7AV8is1dyQKTdDiNnfExt9nI0JCJ7U4FIFbUzsyCbyBdqeGug==',key_name='tempest-TestNetworkBasicOps-871464086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee60ff0d117e468aa42c7d39022568ea',ramdisk_id='',reservation_id='r-mcvw58o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-291434657',owner_user_name='tempest-TestNetworkBasicOps-291434657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:45:23Z,user_data=None,user_id='3f19699d7cb4493292a31daef496a1c2',uuid=7535b6dd-3ef8-4847-812d-f0a9208df287,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.014 189495 DEBUG nova.network.os_vif_util [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converting VIF {"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.015 189495 DEBUG nova.network.os_vif_util [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:34:1f,bridge_name='br-int',has_traffic_filtering=True,id=5f6c9141-b437-4ca0-bceb-99a3d14bb457,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f6c9141-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.015 189495 DEBUG os_vif [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:34:1f,bridge_name='br-int',has_traffic_filtering=True,id=5f6c9141-b437-4ca0-bceb-99a3d14bb457,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f6c9141-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.016 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.016 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.017 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.021 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.021 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f6c9141-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.022 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f6c9141-b4, col_values=(('external_ids', {'iface-id': '5f6c9141-b437-4ca0-bceb-99a3d14bb457', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:34:1f', 'vm-uuid': '7535b6dd-3ef8-4847-812d-f0a9208df287'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.024 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:28 compute-0 NetworkManager[56318]: <info>  [1764582328.0267] manager: (tap5f6c9141-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.027 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.036 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.037 189495 INFO os_vif [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:34:1f,bridge_name='br-int',has_traffic_filtering=True,id=5f6c9141-b437-4ca0-bceb-99a3d14bb457,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f6c9141-b4')#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.107 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.107 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.108 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] No VIF found with MAC fa:16:3e:8c:34:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.108 189495 INFO nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Using config drive#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.174 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.871 189495 INFO nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Creating config drive at /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk.config#033[00m
Dec  1 09:45:28 compute-0 nova_compute[189491]: 2025-12-01 09:45:28.878 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0gtqq6c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.007 189495 DEBUG oslo_concurrency.processutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0gtqq6c" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:29 compute-0 kernel: tap5f6c9141-b4: entered promiscuous mode
Dec  1 09:45:29 compute-0 ovn_controller[97794]: 2025-12-01T09:45:29Z|00126|binding|INFO|Claiming lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 for this chassis.
Dec  1 09:45:29 compute-0 ovn_controller[97794]: 2025-12-01T09:45:29Z|00127|binding|INFO|5f6c9141-b437-4ca0-bceb-99a3d14bb457: Claiming fa:16:3e:8c:34:1f 10.100.0.6
Dec  1 09:45:29 compute-0 NetworkManager[56318]: <info>  [1764582329.0857] manager: (tap5f6c9141-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.088 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.099 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:34:1f 10.100.0.6'], port_security=['fa:16:3e:8c:34:1f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7535b6dd-3ef8-4847-812d-f0a9208df287', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee60ff0d117e468aa42c7d39022568ea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8632ed1a-81ae-4d44-8a48-0770ed769e4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45465482-a276-408a-8d6b-656a92e66817, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=5f6c9141-b437-4ca0-bceb-99a3d14bb457) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.101 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 5f6c9141-b437-4ca0-bceb-99a3d14bb457 in datapath 4f3e9b63-cba6-412e-ba07-d66a8b38af02 bound to our chassis#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.104 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f3e9b63-cba6-412e-ba07-d66a8b38af02#033[00m
Dec  1 09:45:29 compute-0 ovn_controller[97794]: 2025-12-01T09:45:29Z|00128|binding|INFO|Setting lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 ovn-installed in OVS
Dec  1 09:45:29 compute-0 ovn_controller[97794]: 2025-12-01T09:45:29Z|00129|binding|INFO|Setting lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 up in Southbound
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.115 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.126 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.124 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[416976e9-19c7-41f3-82f5-7ff5f67ada8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:29 compute-0 systemd-udevd[254184]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.161 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b85939-3ce0-4fed-a6ef-f9bfb02088d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:29 compute-0 systemd-machined[155812]: New machine qemu-13-instance-0000000c.
Dec  1 09:45:29 compute-0 NetworkManager[56318]: <info>  [1764582329.1663] device (tap5f6c9141-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.165 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[5008c54f-1db6-4701-9ef8-a866627f1fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:29 compute-0 NetworkManager[56318]: <info>  [1764582329.1708] device (tap5f6c9141-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:45:29 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.207 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[3ffeb31e-30e2-40c1-ac1b-7ba40ba3a675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.235 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb391ab-c787-4cb2-b157-2779704f1d71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f3e9b63-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:a3:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550202, 'reachable_time': 33319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254191, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.258 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[9531c2be-c79c-491d-846c-5375d2d88d2f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4f3e9b63-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550218, 'tstamp': 550218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254197, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f3e9b63-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550221, 'tstamp': 550221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254197, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.265 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f3e9b63-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.268 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.270 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.271 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f3e9b63-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.272 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.273 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f3e9b63-c0, col_values=(('external_ids', {'iface-id': 'a52d5841-c07f-4d57-abbb-5b84c6008243'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:29 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:29.283 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.538 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582329.538308, 7535b6dd-3ef8-4847-812d-f0a9208df287 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.538 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] VM Started (Lifecycle Event)#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.558 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.565 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582329.5383883, 7535b6dd-3ef8-4847-812d-f0a9208df287 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.565 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.583 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.587 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.603 189495 DEBUG nova.network.neutron [req-0fa1d8e1-bb6c-422f-84ea-23aabee47d48 req-c5391027-7c91-414c-af25-431f3ac20352 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Updated VIF entry in instance network info cache for port 5f6c9141-b437-4ca0-bceb-99a3d14bb457. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.604 189495 DEBUG nova.network.neutron [req-0fa1d8e1-bb6c-422f-84ea-23aabee47d48 req-c5391027-7c91-414c-af25-431f3ac20352 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Updating instance_info_cache with network_info: [{"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.608 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:45:29 compute-0 nova_compute[189491]: 2025-12-01 09:45:29.618 189495 DEBUG oslo_concurrency.lockutils [req-0fa1d8e1-bb6c-422f-84ea-23aabee47d48 req-c5391027-7c91-414c-af25-431f3ac20352 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:29 compute-0 podman[203700]: time="2025-12-01T09:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:45:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec  1 09:45:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5736 "" "Go-http-client/1.1"
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.580 189495 DEBUG nova.compute.manager [req-c3e56cce-0f4a-45b9-ba57-e2de6c671bc1 req-b457f7dd-651b-406a-90aa-1cdee2a286b3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received event network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.582 189495 DEBUG oslo_concurrency.lockutils [req-c3e56cce-0f4a-45b9-ba57-e2de6c671bc1 req-b457f7dd-651b-406a-90aa-1cdee2a286b3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.583 189495 DEBUG oslo_concurrency.lockutils [req-c3e56cce-0f4a-45b9-ba57-e2de6c671bc1 req-b457f7dd-651b-406a-90aa-1cdee2a286b3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.583 189495 DEBUG oslo_concurrency.lockutils [req-c3e56cce-0f4a-45b9-ba57-e2de6c671bc1 req-b457f7dd-651b-406a-90aa-1cdee2a286b3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.584 189495 DEBUG nova.compute.manager [req-c3e56cce-0f4a-45b9-ba57-e2de6c671bc1 req-b457f7dd-651b-406a-90aa-1cdee2a286b3 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Processing event network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.585 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.609 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582330.5914783, 7535b6dd-3ef8-4847-812d-f0a9208df287 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.610 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.612 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.619 189495 INFO nova.virt.libvirt.driver [-] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Instance spawned successfully.#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.620 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.629 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.638 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.650 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.651 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.652 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.652 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.653 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.654 189495 DEBUG nova.virt.libvirt.driver [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.660 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.697 189495 INFO nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Took 6.90 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.698 189495 DEBUG nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.772 189495 INFO nova.compute.manager [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Took 7.45 seconds to build instance.#033[00m
Dec  1 09:45:30 compute-0 nova_compute[189491]: 2025-12-01 09:45:30.795 189495 DEBUG oslo_concurrency.lockutils [None req-2440282d-4f1c-4f5e-8016-694d753518f1 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:31 compute-0 openstack_network_exporter[205866]: ERROR   09:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:45:31 compute-0 openstack_network_exporter[205866]: ERROR   09:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:45:31 compute-0 openstack_network_exporter[205866]: ERROR   09:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:45:31 compute-0 openstack_network_exporter[205866]: ERROR   09:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:45:31 compute-0 openstack_network_exporter[205866]: ERROR   09:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:45:31 compute-0 podman[254208]: 2025-12-01 09:45:31.64476575 +0000 UTC m=+0.103879697 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 09:45:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:32.022 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:45:32 compute-0 nova_compute[189491]: 2025-12-01 09:45:32.029 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:32.032 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:45:32 compute-0 nova_compute[189491]: 2025-12-01 09:45:32.663 189495 DEBUG nova.compute.manager [req-4002fed9-e2ad-42f6-aca1-ce7239f14b06 req-c266604f-b095-477f-aaeb-889a26b8428c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received event network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:32 compute-0 nova_compute[189491]: 2025-12-01 09:45:32.663 189495 DEBUG oslo_concurrency.lockutils [req-4002fed9-e2ad-42f6-aca1-ce7239f14b06 req-c266604f-b095-477f-aaeb-889a26b8428c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:32 compute-0 nova_compute[189491]: 2025-12-01 09:45:32.663 189495 DEBUG oslo_concurrency.lockutils [req-4002fed9-e2ad-42f6-aca1-ce7239f14b06 req-c266604f-b095-477f-aaeb-889a26b8428c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:32 compute-0 nova_compute[189491]: 2025-12-01 09:45:32.664 189495 DEBUG oslo_concurrency.lockutils [req-4002fed9-e2ad-42f6-aca1-ce7239f14b06 req-c266604f-b095-477f-aaeb-889a26b8428c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:32 compute-0 nova_compute[189491]: 2025-12-01 09:45:32.664 189495 DEBUG nova.compute.manager [req-4002fed9-e2ad-42f6-aca1-ce7239f14b06 req-c266604f-b095-477f-aaeb-889a26b8428c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] No waiting events found dispatching network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:45:32 compute-0 nova_compute[189491]: 2025-12-01 09:45:32.664 189495 WARNING nova.compute.manager [req-4002fed9-e2ad-42f6-aca1-ce7239f14b06 req-c266604f-b095-477f-aaeb-889a26b8428c ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received unexpected event network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 for instance with vm_state active and task_state None.#033[00m
Dec  1 09:45:32 compute-0 podman[254228]: 2025-12-01 09:45:32.719558935 +0000 UTC m=+0.092752366 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:45:32 compute-0 podman[254229]: 2025-12-01 09:45:32.734781577 +0000 UTC m=+0.100975267 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=)
Dec  1 09:45:33 compute-0 nova_compute[189491]: 2025-12-01 09:45:33.025 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:33 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:33.034 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:33 compute-0 nova_compute[189491]: 2025-12-01 09:45:33.177 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:34 compute-0 ovn_controller[97794]: 2025-12-01T09:45:34Z|00130|memory|INFO|peak resident set size grew 53% in last 2798.4 seconds, from 16256 kB to 24872 kB
Dec  1 09:45:34 compute-0 ovn_controller[97794]: 2025-12-01T09:45:34Z|00131|memory|INFO|idl-cells-OVN_Southbound:11018 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:379 lflow-cache-entries-cache-matches:296 lflow-cache-size-KB:1604 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:706 ofctrl_installed_flow_usage-KB:514 ofctrl_sb_flow_ref_usage-KB:267
Dec  1 09:45:35 compute-0 nova_compute[189491]: 2025-12-01 09:45:35.190 189495 DEBUG nova.compute.manager [req-5382c2a2-afdd-4b70-9080-c91293ff75b1 req-0eff6ac2-c8c2-4b9d-ab96-275ad985ddd6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received event network-changed-5f6c9141-b437-4ca0-bceb-99a3d14bb457 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:35 compute-0 nova_compute[189491]: 2025-12-01 09:45:35.191 189495 DEBUG nova.compute.manager [req-5382c2a2-afdd-4b70-9080-c91293ff75b1 req-0eff6ac2-c8c2-4b9d-ab96-275ad985ddd6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Refreshing instance network info cache due to event network-changed-5f6c9141-b437-4ca0-bceb-99a3d14bb457. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:45:35 compute-0 nova_compute[189491]: 2025-12-01 09:45:35.191 189495 DEBUG oslo_concurrency.lockutils [req-5382c2a2-afdd-4b70-9080-c91293ff75b1 req-0eff6ac2-c8c2-4b9d-ab96-275ad985ddd6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:35 compute-0 nova_compute[189491]: 2025-12-01 09:45:35.192 189495 DEBUG oslo_concurrency.lockutils [req-5382c2a2-afdd-4b70-9080-c91293ff75b1 req-0eff6ac2-c8c2-4b9d-ab96-275ad985ddd6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:35 compute-0 nova_compute[189491]: 2025-12-01 09:45:35.192 189495 DEBUG nova.network.neutron [req-5382c2a2-afdd-4b70-9080-c91293ff75b1 req-0eff6ac2-c8c2-4b9d-ab96-275ad985ddd6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Refreshing network info cache for port 5f6c9141-b437-4ca0-bceb-99a3d14bb457 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.034 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.179 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.658 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "b6b22803-169f-45be-85f7-058bfa3f2970" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.659 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.680 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.717 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.734 189495 DEBUG nova.network.neutron [req-5382c2a2-afdd-4b70-9080-c91293ff75b1 req-0eff6ac2-c8c2-4b9d-ab96-275ad985ddd6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Updated VIF entry in instance network info cache for port 5f6c9141-b437-4ca0-bceb-99a3d14bb457. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.736 189495 DEBUG nova.network.neutron [req-5382c2a2-afdd-4b70-9080-c91293ff75b1 req-0eff6ac2-c8c2-4b9d-ab96-275ad985ddd6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Updating instance_info_cache with network_info: [{"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.804 189495 DEBUG oslo_concurrency.lockutils [req-5382c2a2-afdd-4b70-9080-c91293ff75b1 req-0eff6ac2-c8c2-4b9d-ab96-275ad985ddd6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-7535b6dd-3ef8-4847-812d-f0a9208df287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.830 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.831 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.841 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.841 189495 INFO nova.compute.claims [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.985 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.985 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.986 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:45:38 compute-0 nova_compute[189491]: 2025-12-01 09:45:38.986 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.052 189495 DEBUG nova.compute.provider_tree [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.069 189495 DEBUG nova.scheduler.client.report [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.093 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.093 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.149 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.150 189495 DEBUG nova.network.neutron [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.177 189495 INFO nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.196 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.291 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.292 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.293 189495 INFO nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Creating image(s)#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.294 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "/var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.294 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "/var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.295 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "/var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.311 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.383 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.385 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.386 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.412 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.440 189495 DEBUG nova.policy [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b40ddefd6a0e437e95ddb1bc36d5ec0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db1d07a763fd4c1d806a7cf648ffae54', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.485 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.487 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.539 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.540 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.541 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.606 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.608 189495 DEBUG nova.virt.disk.api [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Checking if we can resize image /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.608 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.678 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.679 189495 DEBUG nova.virt.disk.api [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Cannot resize image /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.680 189495 DEBUG nova.objects.instance [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lazy-loading 'migration_context' on Instance uuid b6b22803-169f-45be-85f7-058bfa3f2970 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.694 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.695 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Ensure instance console log exists: /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.695 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.696 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.696 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.866 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.868 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.868 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.869 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.869 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.870 189495 INFO nova.compute.manager [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Terminating instance#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.872 189495 DEBUG nova.compute.manager [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:45:39 compute-0 kernel: tap9dc75317-7a (unregistering): left promiscuous mode
Dec  1 09:45:39 compute-0 NetworkManager[56318]: <info>  [1764582339.9063] device (tap9dc75317-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.919 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:39 compute-0 ovn_controller[97794]: 2025-12-01T09:45:39Z|00132|binding|INFO|Releasing lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb from this chassis (sb_readonly=0)
Dec  1 09:45:39 compute-0 ovn_controller[97794]: 2025-12-01T09:45:39Z|00133|binding|INFO|Setting lport 9dc75317-7a9b-4763-9189-4ea68bfc3ccb down in Southbound
Dec  1 09:45:39 compute-0 ovn_controller[97794]: 2025-12-01T09:45:39Z|00134|binding|INFO|Removing iface tap9dc75317-7a ovn-installed in OVS
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.926 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:39.939 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:32:12 10.100.0.14'], port_security=['fa:16:3e:81:32:12 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b5a25e93-8e59-4459-a45e-2d1d2d486bbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5fc8e7c1a854418b0a110cc22e69de0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '72afbc16-616c-4679-8b1b-dcb1251c5132', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3074f1d2-6f44-4fa9-90f3-bc6399575f2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=9dc75317-7a9b-4763-9189-4ea68bfc3ccb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:45:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:39.941 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 9dc75317-7a9b-4763-9189-4ea68bfc3ccb in datapath 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea unbound from our chassis#033[00m
Dec  1 09:45:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:39.943 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:45:39 compute-0 nova_compute[189491]: 2025-12-01 09:45:39.945 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:39.946 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a31d6210-9643-4a7d-a1c5-44513fba0f60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:39 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:39.947 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea namespace which is not needed anymore#033[00m
Dec  1 09:45:39 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  1 09:45:39 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000008.scope: Consumed 42.993s CPU time.
Dec  1 09:45:39 compute-0 systemd-machined[155812]: Machine qemu-10-instance-00000008 terminated.
Dec  1 09:45:40 compute-0 podman[254287]: 2025-12-01 09:45:40.058640148 +0000 UTC m=+0.127008013 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 09:45:40 compute-0 podman[254290]: 2025-12-01 09:45:40.064632274 +0000 UTC m=+0.119423437 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.143 189495 INFO nova.virt.libvirt.driver [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Instance destroyed successfully.#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.144 189495 DEBUG nova.objects.instance [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lazy-loading 'resources' on Instance uuid b5a25e93-8e59-4459-a45e-2d1d2d486bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.169 189495 DEBUG nova.virt.libvirt.vif [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2131740452',display_name='tempest-ServerActionsTestJSON-server-2131740452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2131740452',id=8,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQFUVYl1Xqq2gIQN4/eCJ8cnpGKeD2gZ7u/gkHTzBRwJJoku8v2NGbkC1lQIa8TB9NaZUcsSyfv1koauiYvXUFGYORBUpCcLDSn5ClA7+eTQ5bJXZBZqJiWDZmhR8SgRA==',key_name='tempest-keypair-1047797503',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:43:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a5fc8e7c1a854418b0a110cc22e69de0',ramdisk_id='',reservation_id='r-k3gqld7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-253829526',owner_user_name='tempest-ServerActionsTestJSON-253829526-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:44:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f215f81d0ab4d1fb34e21bf69e390fe',uuid=b5a25e93-8e59-4459-a45e-2d1d2d486bbc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.170 189495 DEBUG nova.network.os_vif_util [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converting VIF {"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.170 189495 DEBUG nova.network.os_vif_util [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.171 189495 DEBUG os_vif [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.173 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.173 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9dc75317-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.177 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.179 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:45:40 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[253424]: [NOTICE]   (253428) : haproxy version is 2.8.14-c23fe91
Dec  1 09:45:40 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[253424]: [NOTICE]   (253428) : path to executable is /usr/sbin/haproxy
Dec  1 09:45:40 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[253424]: [WARNING]  (253428) : Exiting Master process...
Dec  1 09:45:40 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[253424]: [WARNING]  (253428) : Exiting Master process...
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.185 189495 INFO os_vif [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:32:12,bridge_name='br-int',has_traffic_filtering=True,id=9dc75317-7a9b-4763-9189-4ea68bfc3ccb,network=Network(528d6fcc-4f6c-4000-b20b-6a6d9f6135ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dc75317-7a')#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.185 189495 INFO nova.virt.libvirt.driver [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Deleting instance files /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc_del#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.186 189495 INFO nova.virt.libvirt.driver [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Deletion of /var/lib/nova/instances/b5a25e93-8e59-4459-a45e-2d1d2d486bbc_del complete#033[00m
Dec  1 09:45:40 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[253424]: [ALERT]    (253428) : Current worker (253430) exited with code 143 (Terminated)
Dec  1 09:45:40 compute-0 neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea[253424]: [WARNING]  (253428) : All workers exited. Exiting... (0)
Dec  1 09:45:40 compute-0 systemd[1]: libpod-22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181.scope: Deactivated successfully.
Dec  1 09:45:40 compute-0 podman[254346]: 2025-12-01 09:45:40.208838995 +0000 UTC m=+0.130018716 container died 22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.220 189495 DEBUG nova.compute.manager [req-d572196c-5bf0-420b-9de7-591cbeff860e req-db893eb7-077f-4a6b-9b5b-02477d119518 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-unplugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.221 189495 DEBUG oslo_concurrency.lockutils [req-d572196c-5bf0-420b-9de7-591cbeff860e req-db893eb7-077f-4a6b-9b5b-02477d119518 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.222 189495 DEBUG oslo_concurrency.lockutils [req-d572196c-5bf0-420b-9de7-591cbeff860e req-db893eb7-077f-4a6b-9b5b-02477d119518 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.222 189495 DEBUG oslo_concurrency.lockutils [req-d572196c-5bf0-420b-9de7-591cbeff860e req-db893eb7-077f-4a6b-9b5b-02477d119518 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.222 189495 DEBUG nova.compute.manager [req-d572196c-5bf0-420b-9de7-591cbeff860e req-db893eb7-077f-4a6b-9b5b-02477d119518 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] No waiting events found dispatching network-vif-unplugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.222 189495 DEBUG nova.compute.manager [req-d572196c-5bf0-420b-9de7-591cbeff860e req-db893eb7-077f-4a6b-9b5b-02477d119518 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-unplugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.247 189495 INFO nova.compute.manager [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.247 189495 DEBUG oslo.service.loopingcall [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.247 189495 DEBUG nova.compute.manager [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.248 189495 DEBUG nova.network.neutron [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181-userdata-shm.mount: Deactivated successfully.
Dec  1 09:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac9ac3ac428e39bbc7dc4365e0bef6aa8988c9e272e253bf37016ccb16595493-merged.mount: Deactivated successfully.
Dec  1 09:45:40 compute-0 podman[254346]: 2025-12-01 09:45:40.267291593 +0000 UTC m=+0.188471314 container cleanup 22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:45:40 compute-0 systemd[1]: libpod-conmon-22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181.scope: Deactivated successfully.
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.357 189495 DEBUG nova.network.neutron [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Successfully created port: 05122117-0522-4844-80d6-4425d6fae978 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:45:40 compute-0 podman[254387]: 2025-12-01 09:45:40.382041895 +0000 UTC m=+0.087325554 container remove 22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.392 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f95e2b03-2f34-4b4b-8ad3-7e60c2b71a8c]: (4, ('Mon Dec  1 09:45:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea (22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181)\n22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181\nMon Dec  1 09:45:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea (22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181)\n22ae50d543af2fea44af619bbd6caa1db28d45622bb6f0b1e5daf7e0c1cd9181\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.399 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[45221987-0f05-419f-9aa4-e77d617adfe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.401 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap528d6fcc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.404 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:40 compute-0 kernel: tap528d6fcc-40: left promiscuous mode
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.422 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:40 compute-0 nova_compute[189491]: 2025-12-01 09:45:40.426 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.430 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b170a187-8176-4dc8-a3da-5445f24d7138]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.445 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[50422aa8-0851-4448-88df-854c0eafb905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.448 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7fda029a-2fda-45ef-85ec-87c5d6259263]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.468 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[95f97be7-fd58-4d83-bc42-6269f50e4735]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551216, 'reachable_time': 28216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254402, 'error': None, 'target': 'ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d528d6fcc\x2d4f6c\x2d4000\x2db20b\x2d6a6d9f6135ea.mount: Deactivated successfully.
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.474 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-528d6fcc-4f6c-4000-b20b-6a6d9f6135ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:45:40 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:40.474 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb0f525-00c5-4eff-b53a-e7e85ba42a9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:41 compute-0 nova_compute[189491]: 2025-12-01 09:45:41.081 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updating instance_info_cache with network_info: [{"id": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "address": "fa:16:3e:81:32:12", "network": {"id": "528d6fcc-4f6c-4000-b20b-6a6d9f6135ea", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1736415669-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5fc8e7c1a854418b0a110cc22e69de0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dc75317-7a", "ovs_interfaceid": "9dc75317-7a9b-4763-9189-4ea68bfc3ccb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:41 compute-0 nova_compute[189491]: 2025-12-01 09:45:41.102 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-b5a25e93-8e59-4459-a45e-2d1d2d486bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:41 compute-0 nova_compute[189491]: 2025-12-01 09:45:41.102 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:45:41 compute-0 nova_compute[189491]: 2025-12-01 09:45:41.801 189495 DEBUG nova.network.neutron [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:41 compute-0 nova_compute[189491]: 2025-12-01 09:45:41.829 189495 INFO nova.compute.manager [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Took 1.58 seconds to deallocate network for instance.#033[00m
Dec  1 09:45:41 compute-0 nova_compute[189491]: 2025-12-01 09:45:41.886 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:41 compute-0 nova_compute[189491]: 2025-12-01 09:45:41.887 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.033 189495 DEBUG nova.compute.manager [req-7c6b166a-09a7-4d40-bd3c-2bfafa25f43f req-df249946-89f5-46eb-a200-0423b7a5ffad ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-deleted-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.036 189495 DEBUG nova.compute.provider_tree [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.050 189495 DEBUG nova.scheduler.client.report [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.101 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.128 189495 INFO nova.scheduler.client.report [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Deleted allocations for instance b5a25e93-8e59-4459-a45e-2d1d2d486bbc#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.202 189495 DEBUG oslo_concurrency.lockutils [None req-ef9ffe62-a50e-4722-b9e0-a7463d4d1251 7f215f81d0ab4d1fb34e21bf69e390fe a5fc8e7c1a854418b0a110cc22e69de0 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.530 189495 DEBUG nova.compute.manager [req-4cb11157-98a6-472d-a939-a8efaeac9d98 req-f342a09b-992b-4892-bd7e-45f75c48a2d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.530 189495 DEBUG oslo_concurrency.lockutils [req-4cb11157-98a6-472d-a939-a8efaeac9d98 req-f342a09b-992b-4892-bd7e-45f75c48a2d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.531 189495 DEBUG oslo_concurrency.lockutils [req-4cb11157-98a6-472d-a939-a8efaeac9d98 req-f342a09b-992b-4892-bd7e-45f75c48a2d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.531 189495 DEBUG oslo_concurrency.lockutils [req-4cb11157-98a6-472d-a939-a8efaeac9d98 req-f342a09b-992b-4892-bd7e-45f75c48a2d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b5a25e93-8e59-4459-a45e-2d1d2d486bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.531 189495 DEBUG nova.compute.manager [req-4cb11157-98a6-472d-a939-a8efaeac9d98 req-f342a09b-992b-4892-bd7e-45f75c48a2d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] No waiting events found dispatching network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.531 189495 WARNING nova.compute.manager [req-4cb11157-98a6-472d-a939-a8efaeac9d98 req-f342a09b-992b-4892-bd7e-45f75c48a2d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Received unexpected event network-vif-plugged-9dc75317-7a9b-4763-9189-4ea68bfc3ccb for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.935 189495 DEBUG nova.network.neutron [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Successfully updated port: 05122117-0522-4844-80d6-4425d6fae978 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.953 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.953 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquired lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:42 compute-0 nova_compute[189491]: 2025-12-01 09:45:42.954 189495 DEBUG nova.network.neutron [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:45:43 compute-0 nova_compute[189491]: 2025-12-01 09:45:43.135 189495 DEBUG nova.network.neutron [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:45:43 compute-0 nova_compute[189491]: 2025-12-01 09:45:43.182 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:43 compute-0 ovn_controller[97794]: 2025-12-01T09:45:43Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:a8:e2 10.100.0.156
Dec  1 09:45:43 compute-0 ovn_controller[97794]: 2025-12-01T09:45:43Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:a8:e2 10.100.0.156
Dec  1 09:45:43 compute-0 nova_compute[189491]: 2025-12-01 09:45:43.995 189495 DEBUG nova.network.neutron [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Updating instance_info_cache with network_info: [{"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.017 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Releasing lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.017 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Instance network_info: |[{"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.020 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Start _get_guest_xml network_info=[{"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.029 189495 WARNING nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.034 189495 DEBUG nova.virt.libvirt.host [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.035 189495 DEBUG nova.virt.libvirt.host [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.040 189495 DEBUG nova.virt.libvirt.host [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.040 189495 DEBUG nova.virt.libvirt.host [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.041 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.041 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.042 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.042 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.042 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.042 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.043 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.043 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.043 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.043 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.044 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.044 189495 DEBUG nova.virt.hardware [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.048 189495 DEBUG nova.virt.libvirt.vif [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:45:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1504290779',display_name='tempest-TestServerBasicOps-server-1504290779',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1504290779',id=13,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsyVwDEy9zFWo1byh4pafiOXmiB/WkK4D/hrDdFOv34J8k/xsRd1CCuGmvU2MUbCoy8qNShC4AQphvN5GZVeRhwJHN24UHvx0V+AFb/wVWYzmICwY2RteV99ijJRZ3ZZg==',key_name='tempest-TestServerBasicOps-1010317755',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db1d07a763fd4c1d806a7cf648ffae54',ramdisk_id='',reservation_id='r-mnemcuob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-818581629',owner_user_name='tempest-TestServerBasicOps-818581629-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:45:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b40ddefd6a0e437e95ddb1bc36d5ec0b',uuid=b6b22803-169f-45be-85f7-058bfa3f2970,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.048 189495 DEBUG nova.network.os_vif_util [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Converting VIF {"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.048 189495 DEBUG nova.network.os_vif_util [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:65:c9,bridge_name='br-int',has_traffic_filtering=True,id=05122117-0522-4844-80d6-4425d6fae978,network=Network(9a42964e-1108-49cc-ac3f-41165766e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05122117-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.049 189495 DEBUG nova.objects.instance [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6b22803-169f-45be-85f7-058bfa3f2970 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.064 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <uuid>b6b22803-169f-45be-85f7-058bfa3f2970</uuid>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <name>instance-0000000d</name>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <nova:name>tempest-TestServerBasicOps-server-1504290779</nova:name>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:45:44</nova:creationTime>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:45:44 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:45:44 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:45:44 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:45:44 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:45:44 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:45:44 compute-0 nova_compute[189491]:        <nova:user uuid="b40ddefd6a0e437e95ddb1bc36d5ec0b">tempest-TestServerBasicOps-818581629-project-member</nova:user>
Dec  1 09:45:44 compute-0 nova_compute[189491]:        <nova:project uuid="db1d07a763fd4c1d806a7cf648ffae54">tempest-TestServerBasicOps-818581629</nova:project>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:45:44 compute-0 nova_compute[189491]:        <nova:port uuid="05122117-0522-4844-80d6-4425d6fae978">
Dec  1 09:45:44 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <system>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <entry name="serial">b6b22803-169f-45be-85f7-058bfa3f2970</entry>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <entry name="uuid">b6b22803-169f-45be-85f7-058bfa3f2970</entry>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </system>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <os>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  </os>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <features>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  </features>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk.config"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:af:65:c9"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <target dev="tap05122117-05"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/console.log" append="off"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <video>
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </video>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:45:44 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:45:44 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:45:44 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:45:44 compute-0 nova_compute[189491]: </domain>
Dec  1 09:45:44 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.065 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Preparing to wait for external event network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.065 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.065 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.065 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.066 189495 DEBUG nova.virt.libvirt.vif [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:45:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1504290779',display_name='tempest-TestServerBasicOps-server-1504290779',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1504290779',id=13,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsyVwDEy9zFWo1byh4pafiOXmiB/WkK4D/hrDdFOv34J8k/xsRd1CCuGmvU2MUbCoy8qNShC4AQphvN5GZVeRhwJHN24UHvx0V+AFb/wVWYzmICwY2RteV99ijJRZ3ZZg==',key_name='tempest-TestServerBasicOps-1010317755',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db1d07a763fd4c1d806a7cf648ffae54',ramdisk_id='',reservation_id='r-mnemcuob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-818581629',owner_user_name='tempest-TestServerBasicOps-818581629-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:45:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b40ddefd6a0e437e95ddb1bc36d5ec0b',uuid=b6b22803-169f-45be-85f7-058bfa3f2970,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.066 189495 DEBUG nova.network.os_vif_util [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Converting VIF {"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.067 189495 DEBUG nova.network.os_vif_util [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:65:c9,bridge_name='br-int',has_traffic_filtering=True,id=05122117-0522-4844-80d6-4425d6fae978,network=Network(9a42964e-1108-49cc-ac3f-41165766e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05122117-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.067 189495 DEBUG os_vif [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:65:c9,bridge_name='br-int',has_traffic_filtering=True,id=05122117-0522-4844-80d6-4425d6fae978,network=Network(9a42964e-1108-49cc-ac3f-41165766e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05122117-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.068 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.068 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.069 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.072 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.072 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05122117-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.073 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05122117-05, col_values=(('external_ids', {'iface-id': '05122117-0522-4844-80d6-4425d6fae978', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:65:c9', 'vm-uuid': 'b6b22803-169f-45be-85f7-058bfa3f2970'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:44 compute-0 NetworkManager[56318]: <info>  [1764582344.0756] manager: (tap05122117-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.076 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.083 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.084 189495 INFO os_vif [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:65:c9,bridge_name='br-int',has_traffic_filtering=True,id=05122117-0522-4844-80d6-4425d6fae978,network=Network(9a42964e-1108-49cc-ac3f-41165766e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05122117-05')#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.154 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.155 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.155 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] No VIF found with MAC fa:16:3e:af:65:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.156 189495 INFO nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Using config drive#033[00m
Dec  1 09:45:44 compute-0 podman[254419]: 2025-12-01 09:45:44.222681709 +0000 UTC m=+0.099470440 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:45:44 compute-0 podman[254420]: 2025-12-01 09:45:44.282196982 +0000 UTC m=+0.152737261 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.456 189495 DEBUG nova.compute.manager [req-5dcca5cd-7f02-45dc-ba3b-182e33946fdc req-b464a9a8-275d-434c-964a-176756a99106 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received event network-changed-05122117-0522-4844-80d6-4425d6fae978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.456 189495 DEBUG nova.compute.manager [req-5dcca5cd-7f02-45dc-ba3b-182e33946fdc req-b464a9a8-275d-434c-964a-176756a99106 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Refreshing instance network info cache due to event network-changed-05122117-0522-4844-80d6-4425d6fae978. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.456 189495 DEBUG oslo_concurrency.lockutils [req-5dcca5cd-7f02-45dc-ba3b-182e33946fdc req-b464a9a8-275d-434c-964a-176756a99106 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.457 189495 DEBUG oslo_concurrency.lockutils [req-5dcca5cd-7f02-45dc-ba3b-182e33946fdc req-b464a9a8-275d-434c-964a-176756a99106 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.457 189495 DEBUG nova.network.neutron [req-5dcca5cd-7f02-45dc-ba3b-182e33946fdc req-b464a9a8-275d-434c-964a-176756a99106 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Refreshing network info cache for port 05122117-0522-4844-80d6-4425d6fae978 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.900 189495 INFO nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Creating config drive at /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk.config#033[00m
Dec  1 09:45:44 compute-0 nova_compute[189491]: 2025-12-01 09:45:44.906 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnryhkda7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.036 189495 DEBUG oslo_concurrency.processutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnryhkda7" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:45 compute-0 NetworkManager[56318]: <info>  [1764582345.1442] manager: (tap05122117-05): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec  1 09:45:45 compute-0 kernel: tap05122117-05: entered promiscuous mode
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.158 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:45 compute-0 ovn_controller[97794]: 2025-12-01T09:45:45Z|00135|binding|INFO|Claiming lport 05122117-0522-4844-80d6-4425d6fae978 for this chassis.
Dec  1 09:45:45 compute-0 ovn_controller[97794]: 2025-12-01T09:45:45Z|00136|binding|INFO|05122117-0522-4844-80d6-4425d6fae978: Claiming fa:16:3e:af:65:c9 10.100.0.9
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.185 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:65:c9 10.100.0.9'], port_security=['fa:16:3e:af:65:c9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6b22803-169f-45be-85f7-058bfa3f2970', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a42964e-1108-49cc-ac3f-41165766e2ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db1d07a763fd4c1d806a7cf648ffae54', 'neutron:revision_number': '2', 'neutron:security_group_ids': '069c984d-c26e-4a65-8713-d57ad23780ec a20c149f-05db-4aff-83b9-441644898711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f98b73b-931c-4f7b-978d-72f3c89b3942, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=05122117-0522-4844-80d6-4425d6fae978) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.188 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 05122117-0522-4844-80d6-4425d6fae978 in datapath 9a42964e-1108-49cc-ac3f-41165766e2ed bound to our chassis#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.192 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a42964e-1108-49cc-ac3f-41165766e2ed#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.209 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:45 compute-0 ovn_controller[97794]: 2025-12-01T09:45:45Z|00137|binding|INFO|Setting lport 05122117-0522-4844-80d6-4425d6fae978 ovn-installed in OVS
Dec  1 09:45:45 compute-0 ovn_controller[97794]: 2025-12-01T09:45:45Z|00138|binding|INFO|Setting lport 05122117-0522-4844-80d6-4425d6fae978 up in Southbound
Dec  1 09:45:45 compute-0 systemd-udevd[254481]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.213 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.214 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b63528fb-2498-4a87-80bd-dde3da83c736]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.215 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9a42964e-11 in ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.217 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9a42964e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.217 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[1c380f4e-5ba8-46d4-b1db-5b9d621f7fde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.219 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b24e5b9c-c444-48d9-998e-5cad456e3747]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 systemd-machined[155812]: New machine qemu-14-instance-0000000d.
Dec  1 09:45:45 compute-0 NetworkManager[56318]: <info>  [1764582345.2317] device (tap05122117-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:45:45 compute-0 NetworkManager[56318]: <info>  [1764582345.2368] device (tap05122117-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.236 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[05a3b415-fbd1-4032-8896-078e4b8827aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.266 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[165da0f5-4948-4f41-84bc-b73bd713e6cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.307 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[e7187163-299e-41dd-b360-b2124dc43fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 systemd-udevd[254485]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:45:45 compute-0 NetworkManager[56318]: <info>  [1764582345.3189] manager: (tap9a42964e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.316 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[de898fd1-356e-4116-a5d5-034d3c5a7bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.375 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1a43d9-676a-4d14-8fee-0e796279955c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.380 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[dd70fc99-e135-4e7e-9f55-3639a3424a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 NetworkManager[56318]: <info>  [1764582345.4166] device (tap9a42964e-10): carrier: link connected
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.422 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[308c26fb-416e-4edd-840c-ef036e2510d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.449 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7c93efce-107c-4928-98d1-006e7fc29c06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a42964e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f1:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558511, 'reachable_time': 32079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254514, 'error': None, 'target': 'ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.469 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7db26836-8cd5-45fb-87af-5eab0521086e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:f133'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558511, 'tstamp': 558511}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254515, 'error': None, 'target': 'ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.498 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6b11c659-b4f0-4cd9-a0c1-6992856876f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a42964e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f1:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558511, 'reachable_time': 32079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254518, 'error': None, 'target': 'ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.524 189495 DEBUG nova.compute.manager [req-533bd88d-cf8f-4b51-9e80-16d4ef1b1959 req-bfb6f44a-3ac0-4011-8b56-c15387ceb07e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received event network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.525 189495 DEBUG oslo_concurrency.lockutils [req-533bd88d-cf8f-4b51-9e80-16d4ef1b1959 req-bfb6f44a-3ac0-4011-8b56-c15387ceb07e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.525 189495 DEBUG oslo_concurrency.lockutils [req-533bd88d-cf8f-4b51-9e80-16d4ef1b1959 req-bfb6f44a-3ac0-4011-8b56-c15387ceb07e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.526 189495 DEBUG oslo_concurrency.lockutils [req-533bd88d-cf8f-4b51-9e80-16d4ef1b1959 req-bfb6f44a-3ac0-4011-8b56-c15387ceb07e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.526 189495 DEBUG nova.compute.manager [req-533bd88d-cf8f-4b51-9e80-16d4ef1b1959 req-bfb6f44a-3ac0-4011-8b56-c15387ceb07e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Processing event network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.548 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5c645268-a441-4c4d-aac0-7c1131070719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.634 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582345.633346, b6b22803-169f-45be-85f7-058bfa3f2970 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.635 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] VM Started (Lifecycle Event)#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.637 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.642 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.647 189495 INFO nova.virt.libvirt.driver [-] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Instance spawned successfully.#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.648 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.647 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b06717d9-c703-4ff1-8e44-75d58faf1c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.649 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a42964e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.650 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.650 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a42964e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.652 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:45 compute-0 kernel: tap9a42964e-10: entered promiscuous mode
Dec  1 09:45:45 compute-0 NetworkManager[56318]: <info>  [1764582345.6539] manager: (tap9a42964e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.656 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.657 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a42964e-10, col_values=(('external_ids', {'iface-id': '6265634a-8973-4de4-bd20-6e57721ad464'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.658 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:45 compute-0 ovn_controller[97794]: 2025-12-01T09:45:45Z|00139|binding|INFO|Releasing lport 6265634a-8973-4de4-bd20-6e57721ad464 from this chassis (sb_readonly=0)
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.660 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.661 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9a42964e-1108-49cc-ac3f-41165766e2ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9a42964e-1108-49cc-ac3f-41165766e2ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.669 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.663 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7ded6106-b619-475f-8255-cd40fab74375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.664 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-9a42964e-1108-49cc-ac3f-41165766e2ed
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/9a42964e-1108-49cc-ac3f-41165766e2ed.pid.haproxy
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID 9a42964e-1108-49cc-ac3f-41165766e2ed
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:45:45 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:45:45.665 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed', 'env', 'PROCESS_TAG=haproxy-9a42964e-1108-49cc-ac3f-41165766e2ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9a42964e-1108-49cc-ac3f-41165766e2ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.680 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.696 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.707 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.708 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.709 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.710 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.710 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.711 189495 DEBUG nova.virt.libvirt.driver [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.722 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.723 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582345.6335897, b6b22803-169f-45be-85f7-058bfa3f2970 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.724 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.752 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.759 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582345.641646, b6b22803-169f-45be-85f7-058bfa3f2970 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.760 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.779 189495 INFO nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Took 6.49 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.781 189495 DEBUG nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.789 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.800 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.829 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.868 189495 INFO nova.compute.manager [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Took 7.07 seconds to build instance.#033[00m
Dec  1 09:45:45 compute-0 nova_compute[189491]: 2025-12-01 09:45:45.887 189495 DEBUG oslo_concurrency.lockutils [None req-a688fbcd-9610-4fd2-8f9b-8785abcbf0c4 b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:46 compute-0 nova_compute[189491]: 2025-12-01 09:45:46.026 189495 DEBUG nova.network.neutron [req-5dcca5cd-7f02-45dc-ba3b-182e33946fdc req-b464a9a8-275d-434c-964a-176756a99106 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Updated VIF entry in instance network info cache for port 05122117-0522-4844-80d6-4425d6fae978. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:45:46 compute-0 nova_compute[189491]: 2025-12-01 09:45:46.028 189495 DEBUG nova.network.neutron [req-5dcca5cd-7f02-45dc-ba3b-182e33946fdc req-b464a9a8-275d-434c-964a-176756a99106 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Updating instance_info_cache with network_info: [{"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:46 compute-0 nova_compute[189491]: 2025-12-01 09:45:46.045 189495 DEBUG oslo_concurrency.lockutils [req-5dcca5cd-7f02-45dc-ba3b-182e33946fdc req-b464a9a8-275d-434c-964a-176756a99106 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:46 compute-0 podman[254554]: 2025-12-01 09:45:46.149276904 +0000 UTC m=+0.077730929 container create 6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:45:46 compute-0 podman[254554]: 2025-12-01 09:45:46.111272896 +0000 UTC m=+0.039726921 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:45:46 compute-0 systemd[1]: Started libpod-conmon-6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1.scope.
Dec  1 09:45:46 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4585a309b274e2655d189ab45ac5994ff00c1bfaab1a29917668f2d82e03a91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:45:46 compute-0 podman[254554]: 2025-12-01 09:45:46.269756306 +0000 UTC m=+0.198210351 container init 6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:45:46 compute-0 podman[254554]: 2025-12-01 09:45:46.278618232 +0000 UTC m=+0.207072257 container start 6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:45:46 compute-0 neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed[254569]: [NOTICE]   (254573) : New worker (254575) forked
Dec  1 09:45:46 compute-0 neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed[254569]: [NOTICE]   (254573) : Loading success.
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.618 189495 DEBUG nova.compute.manager [req-2637b98e-a61e-4b36-a248-b687dd71a590 req-31a8f627-dd68-4e37-8485-2d85e235708f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received event network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.620 189495 DEBUG oslo_concurrency.lockutils [req-2637b98e-a61e-4b36-a248-b687dd71a590 req-31a8f627-dd68-4e37-8485-2d85e235708f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.622 189495 DEBUG oslo_concurrency.lockutils [req-2637b98e-a61e-4b36-a248-b687dd71a590 req-31a8f627-dd68-4e37-8485-2d85e235708f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.623 189495 DEBUG oslo_concurrency.lockutils [req-2637b98e-a61e-4b36-a248-b687dd71a590 req-31a8f627-dd68-4e37-8485-2d85e235708f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.624 189495 DEBUG nova.compute.manager [req-2637b98e-a61e-4b36-a248-b687dd71a590 req-31a8f627-dd68-4e37-8485-2d85e235708f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] No waiting events found dispatching network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.625 189495 WARNING nova.compute.manager [req-2637b98e-a61e-4b36-a248-b687dd71a590 req-31a8f627-dd68-4e37-8485-2d85e235708f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received unexpected event network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 for instance with vm_state active and task_state None.#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.747 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.748 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.748 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.749 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.863 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.937 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:47 compute-0 nova_compute[189491]: 2025-12-01 09:45:47.949 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.027 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.036 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.097 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.099 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.185 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.200 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.215 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.284 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.285 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.351 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.360 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.429 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.431 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.531 189495 DEBUG nova.compute.manager [req-6bbeebfd-cf9a-48c0-94c2-d76192b01018 req-3eb15966-524f-4a69-b92d-e749f2cf13e0 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received event network-changed-05122117-0522-4844-80d6-4425d6fae978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.532 189495 DEBUG nova.compute.manager [req-6bbeebfd-cf9a-48c0-94c2-d76192b01018 req-3eb15966-524f-4a69-b92d-e749f2cf13e0 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Refreshing instance network info cache due to event network-changed-05122117-0522-4844-80d6-4425d6fae978. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.533 189495 DEBUG oslo_concurrency.lockutils [req-6bbeebfd-cf9a-48c0-94c2-d76192b01018 req-3eb15966-524f-4a69-b92d-e749f2cf13e0 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.533 189495 DEBUG oslo_concurrency.lockutils [req-6bbeebfd-cf9a-48c0-94c2-d76192b01018 req-3eb15966-524f-4a69-b92d-e749f2cf13e0 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.534 189495 DEBUG nova.network.neutron [req-6bbeebfd-cf9a-48c0-94c2-d76192b01018 req-3eb15966-524f-4a69-b92d-e749f2cf13e0 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Refreshing network info cache for port 05122117-0522-4844-80d6-4425d6fae978 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:45:48 compute-0 nova_compute[189491]: 2025-12-01 09:45:48.539 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.076 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.102 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.104 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4778MB free_disk=72.24681854248047GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.104 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.104 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.233 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 70f48496-14bd-4e6f-8706-262d8e6b9510 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.234 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.234 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance 7535b6dd-3ef8-4847-812d-f0a9208df287 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.234 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance b6b22803-169f-45be-85f7-058bfa3f2970 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.234 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.234 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.359 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.379 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.411 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:45:49 compute-0 nova_compute[189491]: 2025-12-01 09:45:49.411 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:45:50 compute-0 nova_compute[189491]: 2025-12-01 09:45:50.209 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:50 compute-0 nova_compute[189491]: 2025-12-01 09:45:50.266 189495 DEBUG nova.network.neutron [req-6bbeebfd-cf9a-48c0-94c2-d76192b01018 req-3eb15966-524f-4a69-b92d-e749f2cf13e0 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Updated VIF entry in instance network info cache for port 05122117-0522-4844-80d6-4425d6fae978. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:45:50 compute-0 nova_compute[189491]: 2025-12-01 09:45:50.266 189495 DEBUG nova.network.neutron [req-6bbeebfd-cf9a-48c0-94c2-d76192b01018 req-3eb15966-524f-4a69-b92d-e749f2cf13e0 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Updating instance_info_cache with network_info: [{"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:45:50 compute-0 nova_compute[189491]: 2025-12-01 09:45:50.288 189495 DEBUG oslo_concurrency.lockutils [req-6bbeebfd-cf9a-48c0-94c2-d76192b01018 req-3eb15966-524f-4a69-b92d-e749f2cf13e0 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-b6b22803-169f-45be-85f7-058bfa3f2970" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:45:50 compute-0 ovn_controller[97794]: 2025-12-01T09:45:50Z|00140|binding|INFO|Releasing lport a52d5841-c07f-4d57-abbb-5b84c6008243 from this chassis (sb_readonly=0)
Dec  1 09:45:50 compute-0 ovn_controller[97794]: 2025-12-01T09:45:50Z|00141|binding|INFO|Releasing lport 6265634a-8973-4de4-bd20-6e57721ad464 from this chassis (sb_readonly=0)
Dec  1 09:45:50 compute-0 ovn_controller[97794]: 2025-12-01T09:45:50Z|00142|binding|INFO|Releasing lport 7159c06b-520e-4157-9235-0b4ddbac66cf from this chassis (sb_readonly=0)
Dec  1 09:45:50 compute-0 nova_compute[189491]: 2025-12-01 09:45:50.550 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:51 compute-0 nova_compute[189491]: 2025-12-01 09:45:51.412 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:51 compute-0 nova_compute[189491]: 2025-12-01 09:45:51.413 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:51 compute-0 nova_compute[189491]: 2025-12-01 09:45:51.413 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:45:51 compute-0 nova_compute[189491]: 2025-12-01 09:45:51.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:51 compute-0 nova_compute[189491]: 2025-12-01 09:45:51.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:52 compute-0 nova_compute[189491]: 2025-12-01 09:45:52.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:52 compute-0 nova_compute[189491]: 2025-12-01 09:45:52.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:53 compute-0 nova_compute[189491]: 2025-12-01 09:45:53.188 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:54 compute-0 nova_compute[189491]: 2025-12-01 09:45:54.081 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:54 compute-0 nova_compute[189491]: 2025-12-01 09:45:54.343 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:54 compute-0 podman[254610]: 2025-12-01 09:45:54.718466714 +0000 UTC m=+0.084018333 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:45:54 compute-0 podman[254611]: 2025-12-01 09:45:54.740867591 +0000 UTC m=+0.096325143 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:45:55 compute-0 nova_compute[189491]: 2025-12-01 09:45:55.139 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764582340.135538, b5a25e93-8e59-4459-a45e-2d1d2d486bbc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:45:55 compute-0 nova_compute[189491]: 2025-12-01 09:45:55.140 189495 INFO nova.compute.manager [-] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:45:55 compute-0 nova_compute[189491]: 2025-12-01 09:45:55.313 189495 DEBUG nova.compute.manager [None req-be1066aa-f559-4683-b5bd-2bba89126b76 - - - - - -] [instance: b5a25e93-8e59-4459-a45e-2d1d2d486bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:45:57 compute-0 nova_compute[189491]: 2025-12-01 09:45:57.911 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:58 compute-0 nova_compute[189491]: 2025-12-01 09:45:58.192 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:58 compute-0 nova_compute[189491]: 2025-12-01 09:45:58.711 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:45:59 compute-0 nova_compute[189491]: 2025-12-01 09:45:59.085 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:45:59 compute-0 podman[203700]: time="2025-12-01T09:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:45:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec  1 09:45:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5733 "" "Go-http-client/1.1"
Dec  1 09:46:00 compute-0 ovn_controller[97794]: 2025-12-01T09:46:00Z|00143|binding|INFO|Releasing lport a52d5841-c07f-4d57-abbb-5b84c6008243 from this chassis (sb_readonly=0)
Dec  1 09:46:00 compute-0 ovn_controller[97794]: 2025-12-01T09:46:00Z|00144|binding|INFO|Releasing lport 6265634a-8973-4de4-bd20-6e57721ad464 from this chassis (sb_readonly=0)
Dec  1 09:46:00 compute-0 ovn_controller[97794]: 2025-12-01T09:46:00Z|00145|binding|INFO|Releasing lport 7159c06b-520e-4157-9235-0b4ddbac66cf from this chassis (sb_readonly=0)
Dec  1 09:46:00 compute-0 nova_compute[189491]: 2025-12-01 09:46:00.269 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:01 compute-0 openstack_network_exporter[205866]: ERROR   09:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:46:01 compute-0 openstack_network_exporter[205866]: ERROR   09:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:46:01 compute-0 openstack_network_exporter[205866]: ERROR   09:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:46:01 compute-0 openstack_network_exporter[205866]: ERROR   09:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:46:01 compute-0 openstack_network_exporter[205866]: ERROR   09:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:46:02 compute-0 podman[254651]: 2025-12-01 09:46:02.699558113 +0000 UTC m=+0.075754611 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 09:46:03 compute-0 nova_compute[189491]: 2025-12-01 09:46:03.194 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:03 compute-0 podman[254673]: 2025-12-01 09:46:03.707423564 +0000 UTC m=+0.078807625 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, name=ubi9, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.component=ubi9-container, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 09:46:03 compute-0 podman[254672]: 2025-12-01 09:46:03.73057411 +0000 UTC m=+0.106665876 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:46:04 compute-0 nova_compute[189491]: 2025-12-01 09:46:04.089 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:05 compute-0 nova_compute[189491]: 2025-12-01 09:46:05.338 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:05 compute-0 ovn_controller[97794]: 2025-12-01T09:46:05Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:34:1f 10.100.0.6
Dec  1 09:46:05 compute-0 ovn_controller[97794]: 2025-12-01T09:46:05Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:34:1f 10.100.0.6
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.148 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "4070cce8-ccf0-4909-8358-9924882ce843" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.149 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.165 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.197 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.235 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.236 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.247 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.247 189495 INFO nova.compute.claims [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.433 189495 DEBUG nova.compute.provider_tree [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.447 189495 DEBUG nova.scheduler.client.report [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.865 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.866 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.931 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:46:08 compute-0 nova_compute[189491]: 2025-12-01 09:46:08.932 189495 DEBUG nova.network.neutron [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.091 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.096 189495 INFO nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.118 189495 DEBUG nova.policy [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd64b3ffc20d34dd5af4018e4ea24dabd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bf37d9996bf440eb3bc55aa221d0ae6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.123 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.223 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.225 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.225 189495 INFO nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Creating image(s)#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.226 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "/var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.226 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "/var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.227 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "/var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.241 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.321 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.322 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.323 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.341 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.405 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.406 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.447 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd,backing_fmt=raw /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.448 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.449 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.512 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.514 189495 DEBUG nova.virt.disk.api [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Checking if we can resize image /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.515 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.579 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.580 189495 DEBUG nova.virt.disk.api [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Cannot resize image /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.581 189495 DEBUG nova.objects.instance [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4070cce8-ccf0-4909-8358-9924882ce843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.600 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.601 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Ensure instance console log exists: /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.601 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.602 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.602 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:09 compute-0 nova_compute[189491]: 2025-12-01 09:46:09.647 189495 DEBUG nova.network.neutron [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Successfully created port: 993e74c8-435c-4af8-8267-003c237479c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:46:10 compute-0 nova_compute[189491]: 2025-12-01 09:46:10.467 189495 DEBUG nova.network.neutron [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Successfully updated port: 993e74c8-435c-4af8-8267-003c237479c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:46:10 compute-0 nova_compute[189491]: 2025-12-01 09:46:10.486 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "refresh_cache-4070cce8-ccf0-4909-8358-9924882ce843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:46:10 compute-0 nova_compute[189491]: 2025-12-01 09:46:10.487 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquired lock "refresh_cache-4070cce8-ccf0-4909-8358-9924882ce843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:46:10 compute-0 nova_compute[189491]: 2025-12-01 09:46:10.487 189495 DEBUG nova.network.neutron [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:46:10 compute-0 nova_compute[189491]: 2025-12-01 09:46:10.570 189495 DEBUG nova.compute.manager [req-d0263f68-242f-4a97-bf12-767bddb05e0f req-8657e4b4-8c7c-4a24-8ed3-2b15822314f9 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received event network-changed-993e74c8-435c-4af8-8267-003c237479c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:10 compute-0 nova_compute[189491]: 2025-12-01 09:46:10.570 189495 DEBUG nova.compute.manager [req-d0263f68-242f-4a97-bf12-767bddb05e0f req-8657e4b4-8c7c-4a24-8ed3-2b15822314f9 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Refreshing instance network info cache due to event network-changed-993e74c8-435c-4af8-8267-003c237479c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:46:10 compute-0 nova_compute[189491]: 2025-12-01 09:46:10.571 189495 DEBUG oslo_concurrency.lockutils [req-d0263f68-242f-4a97-bf12-767bddb05e0f req-8657e4b4-8c7c-4a24-8ed3-2b15822314f9 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-4070cce8-ccf0-4909-8358-9924882ce843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:46:10 compute-0 nova_compute[189491]: 2025-12-01 09:46:10.626 189495 DEBUG nova.network.neutron [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:46:10 compute-0 podman[254752]: 2025-12-01 09:46:10.694252175 +0000 UTC m=+0.063867160 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:46:10 compute-0 podman[254751]: 2025-12-01 09:46:10.703411039 +0000 UTC m=+0.082191988 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.490 189495 DEBUG nova.network.neutron [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Updating instance_info_cache with network_info: [{"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.507 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Releasing lock "refresh_cache-4070cce8-ccf0-4909-8358-9924882ce843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.507 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Instance network_info: |[{"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.508 189495 DEBUG oslo_concurrency.lockutils [req-d0263f68-242f-4a97-bf12-767bddb05e0f req-8657e4b4-8c7c-4a24-8ed3-2b15822314f9 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-4070cce8-ccf0-4909-8358-9924882ce843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.508 189495 DEBUG nova.network.neutron [req-d0263f68-242f-4a97-bf12-767bddb05e0f req-8657e4b4-8c7c-4a24-8ed3-2b15822314f9 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Refreshing network info cache for port 993e74c8-435c-4af8-8267-003c237479c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.511 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Start _get_guest_xml network_info=[{"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.518 189495 WARNING nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.526 189495 DEBUG nova.virt.libvirt.host [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.528 189495 DEBUG nova.virt.libvirt.host [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.532 189495 DEBUG nova.virt.libvirt.host [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.533 189495 DEBUG nova.virt.libvirt.host [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.534 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.534 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:41:33Z,direct_url=<?>,disk_format='qcow2',id=7ddeffd1-d06f-4a46-9e41-114974daa90e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fac95b8a995a4174bfa966a8d9d9aa01',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:41:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.534 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.535 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.535 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.535 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.536 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.536 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.536 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.537 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.537 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.537 189495 DEBUG nova.virt.hardware [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.541 189495 DEBUG nova.virt.libvirt.vif [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-656807043',display_name='tempest-ServerAddressesTestJSON-server-656807043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-656807043',id=14,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bf37d9996bf440eb3bc55aa221d0ae6',ramdisk_id='',reservation_id='r-6vueod18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1747833979',owner_user_name='tempest-ServerAddressesTestJSON-1747833979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:46:09Z,user_data=None,user_id='d64b3ffc20d34dd5af4018e4ea24dabd',uuid=4070cce8-ccf0-4909-8358-9924882ce843,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.541 189495 DEBUG nova.network.os_vif_util [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Converting VIF {"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.542 189495 DEBUG nova.network.os_vif_util [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:56:91,bridge_name='br-int',has_traffic_filtering=True,id=993e74c8-435c-4af8-8267-003c237479c4,network=Network(47f1cdb6-d949-499b-a4e6-73d3741aa9be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap993e74c8-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.543 189495 DEBUG nova.objects.instance [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4070cce8-ccf0-4909-8358-9924882ce843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.555 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <uuid>4070cce8-ccf0-4909-8358-9924882ce843</uuid>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <name>instance-0000000e</name>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <nova:name>tempest-ServerAddressesTestJSON-server-656807043</nova:name>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:46:11</nova:creationTime>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:46:11 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:46:11 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:46:11 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:46:11 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:46:11 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:46:11 compute-0 nova_compute[189491]:        <nova:user uuid="d64b3ffc20d34dd5af4018e4ea24dabd">tempest-ServerAddressesTestJSON-1747833979-project-member</nova:user>
Dec  1 09:46:11 compute-0 nova_compute[189491]:        <nova:project uuid="0bf37d9996bf440eb3bc55aa221d0ae6">tempest-ServerAddressesTestJSON-1747833979</nova:project>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="7ddeffd1-d06f-4a46-9e41-114974daa90e"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:46:11 compute-0 nova_compute[189491]:        <nova:port uuid="993e74c8-435c-4af8-8267-003c237479c4">
Dec  1 09:46:11 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <system>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <entry name="serial">4070cce8-ccf0-4909-8358-9924882ce843</entry>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <entry name="uuid">4070cce8-ccf0-4909-8358-9924882ce843</entry>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </system>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <os>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  </os>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <features>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  </features>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk.config"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:7e:56:91"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <target dev="tap993e74c8-43"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/console.log" append="off"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <video>
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </video>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:46:11 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:46:11 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:46:11 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:46:11 compute-0 nova_compute[189491]: </domain>
Dec  1 09:46:11 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.557 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Preparing to wait for external event network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.557 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "4070cce8-ccf0-4909-8358-9924882ce843-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.557 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.558 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.558 189495 DEBUG nova.virt.libvirt.vif [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-656807043',display_name='tempest-ServerAddressesTestJSON-server-656807043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-656807043',id=14,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bf37d9996bf440eb3bc55aa221d0ae6',ramdisk_id='',reservation_id='r-6vueod18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1747833979',owner_user_name='tempest-ServerAddressesTestJSON-1747833979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:46:09Z,user_data=None,user_id='d64b3ffc20d34dd5af4018e4ea24dabd',uuid=4070cce8-ccf0-4909-8358-9924882ce843,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.559 189495 DEBUG nova.network.os_vif_util [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Converting VIF {"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.559 189495 DEBUG nova.network.os_vif_util [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:56:91,bridge_name='br-int',has_traffic_filtering=True,id=993e74c8-435c-4af8-8267-003c237479c4,network=Network(47f1cdb6-d949-499b-a4e6-73d3741aa9be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap993e74c8-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.560 189495 DEBUG os_vif [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:56:91,bridge_name='br-int',has_traffic_filtering=True,id=993e74c8-435c-4af8-8267-003c237479c4,network=Network(47f1cdb6-d949-499b-a4e6-73d3741aa9be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap993e74c8-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.560 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.561 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.561 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.564 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.564 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap993e74c8-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.565 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap993e74c8-43, col_values=(('external_ids', {'iface-id': '993e74c8-435c-4af8-8267-003c237479c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:56:91', 'vm-uuid': '4070cce8-ccf0-4909-8358-9924882ce843'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.567 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.570 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:46:11 compute-0 NetworkManager[56318]: <info>  [1764582371.5708] manager: (tap993e74c8-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.579 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.580 189495 INFO os_vif [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:56:91,bridge_name='br-int',has_traffic_filtering=True,id=993e74c8-435c-4af8-8267-003c237479c4,network=Network(47f1cdb6-d949-499b-a4e6-73d3741aa9be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap993e74c8-43')#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.632 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.632 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.633 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] No VIF found with MAC fa:16:3e:7e:56:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 09:46:11 compute-0 nova_compute[189491]: 2025-12-01 09:46:11.633 189495 INFO nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Using config drive#033[00m
Dec  1 09:46:13 compute-0 nova_compute[189491]: 2025-12-01 09:46:13.200 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:13 compute-0 nova_compute[189491]: 2025-12-01 09:46:13.848 189495 INFO nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Creating config drive at /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk.config#033[00m
Dec  1 09:46:13 compute-0 nova_compute[189491]: 2025-12-01 09:46:13.854 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpseuw50og execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:13 compute-0 nova_compute[189491]: 2025-12-01 09:46:13.985 189495 DEBUG oslo_concurrency.processutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpseuw50og" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.051 189495 INFO nova.compute.manager [None req-2fd1cf95-172b-4e8d-976f-2a72cc946a00 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Get console output#033[00m
Dec  1 09:46:14 compute-0 kernel: tap993e74c8-43: entered promiscuous mode
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00146|binding|INFO|Claiming lport 993e74c8-435c-4af8-8267-003c237479c4 for this chassis.
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00147|binding|INFO|993e74c8-435c-4af8-8267-003c237479c4: Claiming fa:16:3e:7e:56:91 10.100.0.7
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.069 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 NetworkManager[56318]: <info>  [1764582374.0738] manager: (tap993e74c8-43): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.080 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:56:91 10.100.0.7'], port_security=['fa:16:3e:7e:56:91 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4070cce8-ccf0-4909-8358-9924882ce843', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47f1cdb6-d949-499b-a4e6-73d3741aa9be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bf37d9996bf440eb3bc55aa221d0ae6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0067e0c-2968-4584-a28a-73f098e0f433', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35730c51-b958-4995-99c2-7808a72f37c4, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=993e74c8-435c-4af8-8267-003c237479c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.082 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 993e74c8-435c-4af8-8267-003c237479c4 in datapath 47f1cdb6-d949-499b-a4e6-73d3741aa9be bound to our chassis#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.088 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 47f1cdb6-d949-499b-a4e6-73d3741aa9be#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.087 239700 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00148|binding|INFO|Setting lport 993e74c8-435c-4af8-8267-003c237479c4 ovn-installed in OVS
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00149|binding|INFO|Setting lport 993e74c8-435c-4af8-8267-003c237479c4 up in Southbound
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.106 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.117 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d321d5fa-5909-4b86-a459-da6ff655bad3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.118 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap47f1cdb6-d1 in ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 09:46:14 compute-0 systemd-udevd[254808]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.126 239818 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap47f1cdb6-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.126 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7da7104b-b557-4a8e-8d70-205359718525]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 systemd-machined[155812]: New machine qemu-15-instance-0000000e.
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.130 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[162de382-57ef-4274-97a8-06938cef6d60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec  1 09:46:14 compute-0 NetworkManager[56318]: <info>  [1764582374.1531] device (tap993e74c8-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:46:14 compute-0 NetworkManager[56318]: <info>  [1764582374.1542] device (tap993e74c8-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.156 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[be0ca653-13f7-423a-81b3-d8a905fd86e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.190 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a774254c-90cb-4185-b2c6-254e0a889ad2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.237 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[6394e7e2-1d0c-4725-8f3b-9c378eef8897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 NetworkManager[56318]: <info>  [1764582374.2478] manager: (tap47f1cdb6-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Dec  1 09:46:14 compute-0 systemd-udevd[254811]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.250 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc9ec0d-b8fb-4e46-a696-984e649aa4f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.299 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[276e6948-9260-48e0-b1e1-d9c040d8ac79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.304 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[65e0b833-ff14-4189-8601-900b20dd0a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 NetworkManager[56318]: <info>  [1764582374.3428] device (tap47f1cdb6-d0): carrier: link connected
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.349 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7a057d-8e0c-4461-9a84-d2b4488941b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.374 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e10b154f-bcca-4d30-bed4-5fe05d5ea405]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47f1cdb6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:b0:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561403, 'reachable_time': 30966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254854, 'error': None, 'target': 'ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.399 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[26fa9de3-bea3-4e67-babb-c5188768e686]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:b0f4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561403, 'tstamp': 561403}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254858, 'error': None, 'target': 'ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 podman[254829]: 2025-12-01 09:46:14.414834837 +0000 UTC m=+0.123093296 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.425 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "7535b6dd-3ef8-4847-812d-f0a9208df287" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.426 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.426 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.427 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.427 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.429 189495 INFO nova.compute.manager [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Terminating instance#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.430 189495 DEBUG nova.compute.manager [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.431 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[aaad272e-4937-4a83-83c4-a8d8973e9204]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47f1cdb6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:b0:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561403, 'reachable_time': 30966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254860, 'error': None, 'target': 'ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 kernel: tap5f6c9141-b4 (unregistering): left promiscuous mode
Dec  1 09:46:14 compute-0 NetworkManager[56318]: <info>  [1764582374.4631] device (tap5f6c9141-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.487 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00150|binding|INFO|Releasing lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 from this chassis (sb_readonly=0)
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00151|binding|INFO|Setting lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 down in Southbound
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00152|binding|INFO|Removing iface tap5f6c9141-b4 ovn-installed in OVS
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.495 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.499 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:34:1f 10.100.0.6'], port_security=['fa:16:3e:8c:34:1f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7535b6dd-3ef8-4847-812d-f0a9208df287', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee60ff0d117e468aa42c7d39022568ea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8632ed1a-81ae-4d44-8a48-0770ed769e4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45465482-a276-408a-8d6b-656a92e66817, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=5f6c9141-b437-4ca0-bceb-99a3d14bb457) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.505 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.505 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c13ae11a-56e2-4fdc-924b-c2388327de58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec  1 09:46:14 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 37.543s CPU time.
Dec  1 09:46:14 compute-0 systemd-machined[155812]: Machine qemu-13-instance-0000000c terminated.
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.595 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[85225564-f498-4d18-a283-3cba08a1800b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.597 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47f1cdb6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.597 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.598 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47f1cdb6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:14 compute-0 NetworkManager[56318]: <info>  [1764582374.6010] manager: (tap47f1cdb6-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.601 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 podman[254859]: 2025-12-01 09:46:14.602693134 +0000 UTC m=+0.192234065 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:46:14 compute-0 kernel: tap47f1cdb6-d0: entered promiscuous mode
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.613 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.615 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap47f1cdb6-d0, col_values=(('external_ids', {'iface-id': 'a29e4c11-0731-4838-aa26-f7b767b6bc69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.616 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00153|binding|INFO|Releasing lport a29e4c11-0731-4838-aa26-f7b767b6bc69 from this chassis (sb_readonly=0)
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.636 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.648 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.651 106659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/47f1cdb6-d949-499b-a4e6-73d3741aa9be.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/47f1cdb6-d949-499b-a4e6-73d3741aa9be.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.654 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0f5928-937b-4a07-bbea-b564d28fa54b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.658 106659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: global
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    log         /dev/log local0 debug
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    log-tag     haproxy-metadata-proxy-47f1cdb6-d949-499b-a4e6-73d3741aa9be
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    user        root
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    group       root
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    maxconn     1024
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    pidfile     /var/lib/neutron/external/pids/47f1cdb6-d949-499b-a4e6-73d3741aa9be.pid.haproxy
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    daemon
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: defaults
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    log global
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    mode http
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    option httplog
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    option dontlognull
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    option http-server-close
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    option forwardfor
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    retries                 3
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    timeout http-request    30s
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    timeout connect         30s
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    timeout client          32s
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    timeout server          32s
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    timeout http-keep-alive 30s
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: listen listener
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    bind 169.254.169.254:80
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]:    http-request add-header X-OVN-Network-ID 47f1cdb6-d949-499b-a4e6-73d3741aa9be
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.661 106659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be', 'env', 'PROCESS_TAG=haproxy-47f1cdb6-d949-499b-a4e6-73d3741aa9be', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/47f1cdb6-d949-499b-a4e6-73d3741aa9be.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 09:46:14 compute-0 NetworkManager[56318]: <info>  [1764582374.6709] manager: (tap5f6c9141-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Dec  1 09:46:14 compute-0 kernel: tap5f6c9141-b4: entered promiscuous mode
Dec  1 09:46:14 compute-0 kernel: tap5f6c9141-b4 (unregistering): left promiscuous mode
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.684 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00154|binding|INFO|Claiming lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 for this chassis.
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00155|binding|INFO|5f6c9141-b437-4ca0-bceb-99a3d14bb457: Claiming fa:16:3e:8c:34:1f 10.100.0.6
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.708 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582374.707471, 4070cce8-ccf0-4909-8358-9924882ce843 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.708 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] VM Started (Lifecycle Event)#033[00m
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00156|binding|INFO|Setting lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 ovn-installed in OVS
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00157|if_status|INFO|Not setting lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 down as sb is readonly
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.715 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.718 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.741 189495 INFO nova.virt.libvirt.driver [-] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Instance destroyed successfully.#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.741 189495 DEBUG nova.objects.instance [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lazy-loading 'resources' on Instance uuid 7535b6dd-3ef8-4847-812d-f0a9208df287 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:46:14 compute-0 ovn_controller[97794]: 2025-12-01T09:46:14Z|00158|binding|INFO|Releasing lport 5f6c9141-b437-4ca0-bceb-99a3d14bb457 from this chassis (sb_readonly=0)
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.756 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:34:1f 10.100.0.6'], port_security=['fa:16:3e:8c:34:1f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7535b6dd-3ef8-4847-812d-f0a9208df287', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee60ff0d117e468aa42c7d39022568ea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8632ed1a-81ae-4d44-8a48-0770ed769e4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45465482-a276-408a-8d6b-656a92e66817, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=5f6c9141-b437-4ca0-bceb-99a3d14bb457) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:46:14 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:14.763 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:34:1f 10.100.0.6'], port_security=['fa:16:3e:8c:34:1f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7535b6dd-3ef8-4847-812d-f0a9208df287', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee60ff0d117e468aa42c7d39022568ea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8632ed1a-81ae-4d44-8a48-0770ed769e4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45465482-a276-408a-8d6b-656a92e66817, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=5f6c9141-b437-4ca0-bceb-99a3d14bb457) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.769 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.783 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.784 189495 DEBUG nova.virt.libvirt.vif [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1346121752',display_name='tempest-TestNetworkBasicOps-server-1346121752',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1346121752',id=12,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzY2QooqBKCgrVmFm9t9G6kUoxRR7Z58hf2jxLG81LTp7tA7B5s3qGHwrOLAvUIw9FkUrXmSb+JOXMns7AV8is1dyQKTdDiNnfExt9nI0JCJ7U4FIFbUzsyCbyBdqeGug==',key_name='tempest-TestNetworkBasicOps-871464086',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:45:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ee60ff0d117e468aa42c7d39022568ea',ramdisk_id='',reservation_id='r-mcvw58o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-291434657',owner_user_name='tempest-TestNetworkBasicOps-291434657-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:45:30Z,user_data=None,user_id='3f19699d7cb4493292a31daef496a1c2',uuid=7535b6dd-3ef8-4847-812d-f0a9208df287,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.784 189495 DEBUG nova.network.os_vif_util [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converting VIF {"id": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "address": "fa:16:3e:8c:34:1f", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f6c9141-b4", "ovs_interfaceid": "5f6c9141-b437-4ca0-bceb-99a3d14bb457", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.785 189495 DEBUG nova.network.os_vif_util [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:34:1f,bridge_name='br-int',has_traffic_filtering=True,id=5f6c9141-b437-4ca0-bceb-99a3d14bb457,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f6c9141-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.786 189495 DEBUG os_vif [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:34:1f,bridge_name='br-int',has_traffic_filtering=True,id=5f6c9141-b437-4ca0-bceb-99a3d14bb457,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f6c9141-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.800 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.800 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f6c9141-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.802 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.804 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.816 189495 INFO os_vif [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:34:1f,bridge_name='br-int',has_traffic_filtering=True,id=5f6c9141-b437-4ca0-bceb-99a3d14bb457,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f6c9141-b4')#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.817 189495 INFO nova.virt.libvirt.driver [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Deleting instance files /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287_del#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.817 189495 INFO nova.virt.libvirt.driver [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Deletion of /var/lib/nova/instances/7535b6dd-3ef8-4847-812d-f0a9208df287_del complete#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.824 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582374.7107737, 4070cce8-ccf0-4909-8358-9924882ce843 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.825 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] VM Paused (Lifecycle Event)#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.873 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.881 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.903 189495 INFO nova.compute.manager [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.904 189495 DEBUG oslo.service.loopingcall [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.904 189495 DEBUG nova.compute.manager [-] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.904 189495 DEBUG nova.network.neutron [-] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:46:14 compute-0 nova_compute[189491]: 2025-12-01 09:46:14.909 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.104 189495 DEBUG nova.compute.manager [req-bc1e3a42-5907-4c48-b0f8-911e3e183eb2 req-16d68a5e-1eff-4789-b5fb-765eee33abb7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received event network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.105 189495 DEBUG oslo_concurrency.lockutils [req-bc1e3a42-5907-4c48-b0f8-911e3e183eb2 req-16d68a5e-1eff-4789-b5fb-765eee33abb7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "4070cce8-ccf0-4909-8358-9924882ce843-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.106 189495 DEBUG oslo_concurrency.lockutils [req-bc1e3a42-5907-4c48-b0f8-911e3e183eb2 req-16d68a5e-1eff-4789-b5fb-765eee33abb7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.106 189495 DEBUG oslo_concurrency.lockutils [req-bc1e3a42-5907-4c48-b0f8-911e3e183eb2 req-16d68a5e-1eff-4789-b5fb-765eee33abb7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.107 189495 DEBUG nova.compute.manager [req-bc1e3a42-5907-4c48-b0f8-911e3e183eb2 req-16d68a5e-1eff-4789-b5fb-765eee33abb7 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Processing event network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.108 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.114 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582375.1142159, 4070cce8-ccf0-4909-8358-9924882ce843 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.115 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] VM Resumed (Lifecycle Event)#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.118 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.122 189495 INFO nova.virt.libvirt.driver [-] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Instance spawned successfully.#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.123 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.146 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.155 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.159 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.159 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.161 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.161 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.162 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.163 189495 DEBUG nova.virt.libvirt.driver [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.187 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 09:46:15 compute-0 podman[254938]: 2025-12-01 09:46:15.201899766 +0000 UTC m=+0.081232904 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.309 189495 INFO nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Took 6.09 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.310 189495 DEBUG nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:46:15 compute-0 podman[254938]: 2025-12-01 09:46:15.331584432 +0000 UTC m=+0.210917540 container create 8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.378 189495 INFO nova.compute.manager [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Took 7.17 seconds to build instance.#033[00m
Dec  1 09:46:15 compute-0 systemd[1]: Started libpod-conmon-8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa.scope.
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.396 189495 DEBUG oslo_concurrency.lockutils [None req-030132bd-af17-4c05-aea2-f41a5e1812b3 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:15 compute-0 systemd[1]: Started libcrun container.
Dec  1 09:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf39678119052d5766042772b0b79c5b92284461163701f4946f2ff80546f257/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 09:46:15 compute-0 podman[254938]: 2025-12-01 09:46:15.460168082 +0000 UTC m=+0.339501210 container init 8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:46:15 compute-0 podman[254938]: 2025-12-01 09:46:15.468747142 +0000 UTC m=+0.348080250 container start 8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  1 09:46:15 compute-0 neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be[254952]: [NOTICE]   (254956) : New worker (254958) forked
Dec  1 09:46:15 compute-0 neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be[254952]: [NOTICE]   (254956) : Loading success.
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.532 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 5f6c9141-b437-4ca0-bceb-99a3d14bb457 in datapath 4f3e9b63-cba6-412e-ba07-d66a8b38af02 unbound from our chassis#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.536 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f3e9b63-cba6-412e-ba07-d66a8b38af02#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.555 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[811d93b7-da6d-4f63-b519-c32ca5a4fc65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.603 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[ad797099-f5b3-4aa0-9396-083ebfbd1eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.609 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[4249014c-2bc1-4600-81ec-4d0aaaf443b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.643 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7d6e9a-63d5-4b94-ae3b-6ec455d896e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.672 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc5589d-b4d9-49ac-a350-031b0ed67603]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f3e9b63-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:a3:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550202, 'reachable_time': 33319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254972, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.692 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cbfc13-486e-4e12-bcb7-a18fe5af60a7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4f3e9b63-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550218, 'tstamp': 550218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254973, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f3e9b63-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550221, 'tstamp': 550221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254973, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.695 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f3e9b63-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.697 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.698 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.700 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f3e9b63-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.701 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.701 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f3e9b63-c0, col_values=(('external_ids', {'iface-id': 'a52d5841-c07f-4d57-abbb-5b84c6008243'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.702 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.703 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 5f6c9141-b437-4ca0-bceb-99a3d14bb457 in datapath 4f3e9b63-cba6-412e-ba07-d66a8b38af02 unbound from our chassis#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.705 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f3e9b63-cba6-412e-ba07-d66a8b38af02#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.728 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c94b9fae-cd46-429b-884d-dab5ed3cee33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.767 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b2e1a5-7cec-40c9-8d3a-79a3925248a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.772 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[808f4a85-d2d1-4cd8-8633-a4cf27f3d149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.819 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[29741254-f426-4609-b194-2daa09926f06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.845 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[98e25f00-ce6b-471f-9e92-e3b32ffd5ccd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f3e9b63-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:a3:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550202, 'reachable_time': 33319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254979, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.875 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3c7df3-0a89-455c-b797-8ff35e36c8e1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4f3e9b63-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550218, 'tstamp': 550218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254980, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f3e9b63-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550221, 'tstamp': 550221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254980, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.879 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f3e9b63-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.882 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.884 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.888 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f3e9b63-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.889 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.890 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f3e9b63-c0, col_values=(('external_ids', {'iface-id': 'a52d5841-c07f-4d57-abbb-5b84c6008243'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.892 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.894 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 5f6c9141-b437-4ca0-bceb-99a3d14bb457 in datapath 4f3e9b63-cba6-412e-ba07-d66a8b38af02 unbound from our chassis#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.897 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f3e9b63-cba6-412e-ba07-d66a8b38af02#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.914 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[248274a1-a8c9-49cb-8cbf-37f5a6a59d8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.920 189495 DEBUG nova.network.neutron [req-d0263f68-242f-4a97-bf12-767bddb05e0f req-8657e4b4-8c7c-4a24-8ed3-2b15822314f9 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Updated VIF entry in instance network info cache for port 993e74c8-435c-4af8-8267-003c237479c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.921 189495 DEBUG nova.network.neutron [req-d0263f68-242f-4a97-bf12-767bddb05e0f req-8657e4b4-8c7c-4a24-8ed3-2b15822314f9 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Updating instance_info_cache with network_info: [{"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:46:15 compute-0 nova_compute[189491]: 2025-12-01 09:46:15.941 189495 DEBUG oslo_concurrency.lockutils [req-d0263f68-242f-4a97-bf12-767bddb05e0f req-8657e4b4-8c7c-4a24-8ed3-2b15822314f9 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-4070cce8-ccf0-4909-8358-9924882ce843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.949 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[589040c1-d62f-4343-a1e5-e3006d39c479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:15 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:15.957 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[c93069df-319a-4ced-b215-890331943c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:16.002 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e9ad34-15f0-4aba-b793-ca66ca6e129f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:16 compute-0 nova_compute[189491]: 2025-12-01 09:46:16.026 189495 DEBUG nova.compute.manager [req-9aa40849-38b1-431c-a12f-5c26d0c385d5 req-53246b80-c5f1-4bff-8ce5-5eec8c3d498a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received event network-vif-unplugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:16 compute-0 nova_compute[189491]: 2025-12-01 09:46:16.027 189495 DEBUG oslo_concurrency.lockutils [req-9aa40849-38b1-431c-a12f-5c26d0c385d5 req-53246b80-c5f1-4bff-8ce5-5eec8c3d498a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:16 compute-0 nova_compute[189491]: 2025-12-01 09:46:16.027 189495 DEBUG oslo_concurrency.lockutils [req-9aa40849-38b1-431c-a12f-5c26d0c385d5 req-53246b80-c5f1-4bff-8ce5-5eec8c3d498a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:16 compute-0 nova_compute[189491]: 2025-12-01 09:46:16.028 189495 DEBUG oslo_concurrency.lockutils [req-9aa40849-38b1-431c-a12f-5c26d0c385d5 req-53246b80-c5f1-4bff-8ce5-5eec8c3d498a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:16 compute-0 nova_compute[189491]: 2025-12-01 09:46:16.028 189495 DEBUG nova.compute.manager [req-9aa40849-38b1-431c-a12f-5c26d0c385d5 req-53246b80-c5f1-4bff-8ce5-5eec8c3d498a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] No waiting events found dispatching network-vif-unplugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:16 compute-0 nova_compute[189491]: 2025-12-01 09:46:16.028 189495 DEBUG nova.compute.manager [req-9aa40849-38b1-431c-a12f-5c26d0c385d5 req-53246b80-c5f1-4bff-8ce5-5eec8c3d498a ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received event network-vif-unplugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:46:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:16.030 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5ca5ba-7320-4742-ab6a-3325c747cf6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f3e9b63-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:a3:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550202, 'reachable_time': 33319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254986, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:16.051 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[df8bf5ac-527c-4a9d-b890-0860697be01c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4f3e9b63-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550218, 'tstamp': 550218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254987, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f3e9b63-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550221, 'tstamp': 550221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254987, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:16.053 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f3e9b63-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:16 compute-0 nova_compute[189491]: 2025-12-01 09:46:16.054 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:16 compute-0 nova_compute[189491]: 2025-12-01 09:46:16.056 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:16.056 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f3e9b63-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:16.057 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:46:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:16.057 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f3e9b63-c0, col_values=(('external_ids', {'iface-id': 'a52d5841-c07f-4d57-abbb-5b84c6008243'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:16 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:16.058 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.202 189495 DEBUG nova.compute.manager [req-296ab84d-c072-48e6-97bc-1d3858e1042c req-f64f4125-9e15-4cde-9e8d-8077ce539e22 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received event network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.203 189495 DEBUG oslo_concurrency.lockutils [req-296ab84d-c072-48e6-97bc-1d3858e1042c req-f64f4125-9e15-4cde-9e8d-8077ce539e22 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "4070cce8-ccf0-4909-8358-9924882ce843-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.204 189495 DEBUG oslo_concurrency.lockutils [req-296ab84d-c072-48e6-97bc-1d3858e1042c req-f64f4125-9e15-4cde-9e8d-8077ce539e22 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.204 189495 DEBUG oslo_concurrency.lockutils [req-296ab84d-c072-48e6-97bc-1d3858e1042c req-f64f4125-9e15-4cde-9e8d-8077ce539e22 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.205 189495 DEBUG nova.compute.manager [req-296ab84d-c072-48e6-97bc-1d3858e1042c req-f64f4125-9e15-4cde-9e8d-8077ce539e22 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] No waiting events found dispatching network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.205 189495 WARNING nova.compute.manager [req-296ab84d-c072-48e6-97bc-1d3858e1042c req-f64f4125-9e15-4cde-9e8d-8077ce539e22 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received unexpected event network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 for instance with vm_state active and task_state None.#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.306 189495 DEBUG nova.network.neutron [-] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.334 189495 INFO nova.compute.manager [-] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Took 2.43 seconds to deallocate network for instance.#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.391 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.392 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.735 189495 DEBUG nova.compute.provider_tree [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.763 189495 DEBUG nova.scheduler.client.report [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.795 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.837 189495 INFO nova.scheduler.client.report [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Deleted allocations for instance 7535b6dd-3ef8-4847-812d-f0a9208df287#033[00m
Dec  1 09:46:17 compute-0 nova_compute[189491]: 2025-12-01 09:46:17.924 189495 DEBUG oslo_concurrency.lockutils [None req-cdea49b0-f67d-43a3-ae8a-a4c13e251e8f 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:18 compute-0 nova_compute[189491]: 2025-12-01 09:46:18.107 189495 DEBUG nova.compute.manager [req-0030e5bc-158e-4f2c-9497-2048d82b799b req-f5f7ca23-4732-4a97-a4b6-db1145d9c82f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received event network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:18 compute-0 nova_compute[189491]: 2025-12-01 09:46:18.108 189495 DEBUG oslo_concurrency.lockutils [req-0030e5bc-158e-4f2c-9497-2048d82b799b req-f5f7ca23-4732-4a97-a4b6-db1145d9c82f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:18 compute-0 nova_compute[189491]: 2025-12-01 09:46:18.108 189495 DEBUG oslo_concurrency.lockutils [req-0030e5bc-158e-4f2c-9497-2048d82b799b req-f5f7ca23-4732-4a97-a4b6-db1145d9c82f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:18 compute-0 nova_compute[189491]: 2025-12-01 09:46:18.108 189495 DEBUG oslo_concurrency.lockutils [req-0030e5bc-158e-4f2c-9497-2048d82b799b req-f5f7ca23-4732-4a97-a4b6-db1145d9c82f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "7535b6dd-3ef8-4847-812d-f0a9208df287-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:18 compute-0 nova_compute[189491]: 2025-12-01 09:46:18.108 189495 DEBUG nova.compute.manager [req-0030e5bc-158e-4f2c-9497-2048d82b799b req-f5f7ca23-4732-4a97-a4b6-db1145d9c82f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] No waiting events found dispatching network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:18 compute-0 nova_compute[189491]: 2025-12-01 09:46:18.110 189495 WARNING nova.compute.manager [req-0030e5bc-158e-4f2c-9497-2048d82b799b req-f5f7ca23-4732-4a97-a4b6-db1145d9c82f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received unexpected event network-vif-plugged-5f6c9141-b437-4ca0-bceb-99a3d14bb457 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:46:18 compute-0 nova_compute[189491]: 2025-12-01 09:46:18.203 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.561 189495 DEBUG nova.compute.manager [req-4e676668-fca1-4840-89ae-d26c4a395553 req-978d1fe7-d305-4187-abe3-5a3980ce50a6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Received event network-vif-deleted-5f6c9141-b437-4ca0-bceb-99a3d14bb457 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.658 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "4070cce8-ccf0-4909-8358-9924882ce843" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.662 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.664 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "4070cce8-ccf0-4909-8358-9924882ce843-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.667 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.668 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.677 189495 INFO nova.compute.manager [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Terminating instance#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.686 189495 DEBUG nova.compute.manager [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:46:19 compute-0 kernel: tap993e74c8-43 (unregistering): left promiscuous mode
Dec  1 09:46:19 compute-0 NetworkManager[56318]: <info>  [1764582379.7350] device (tap993e74c8-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:46:19 compute-0 ovn_controller[97794]: 2025-12-01T09:46:19Z|00159|binding|INFO|Releasing lport 993e74c8-435c-4af8-8267-003c237479c4 from this chassis (sb_readonly=0)
Dec  1 09:46:19 compute-0 ovn_controller[97794]: 2025-12-01T09:46:19Z|00160|binding|INFO|Setting lport 993e74c8-435c-4af8-8267-003c237479c4 down in Southbound
Dec  1 09:46:19 compute-0 ovn_controller[97794]: 2025-12-01T09:46:19Z|00161|binding|INFO|Removing iface tap993e74c8-43 ovn-installed in OVS
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.754 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:19 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:19.767 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:56:91 10.100.0.7'], port_security=['fa:16:3e:7e:56:91 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4070cce8-ccf0-4909-8358-9924882ce843', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47f1cdb6-d949-499b-a4e6-73d3741aa9be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bf37d9996bf440eb3bc55aa221d0ae6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0067e0c-2968-4584-a28a-73f098e0f433', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35730c51-b958-4995-99c2-7808a72f37c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=993e74c8-435c-4af8-8267-003c237479c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:46:19 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:19.775 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 993e74c8-435c-4af8-8267-003c237479c4 in datapath 47f1cdb6-d949-499b-a4e6-73d3741aa9be unbound from our chassis#033[00m
Dec  1 09:46:19 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:19.778 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 47f1cdb6-d949-499b-a4e6-73d3741aa9be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.780 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:19 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:19.780 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a28a4cfc-991d-4d24-8349-a9592fcaf32d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:19 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:19.782 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be namespace which is not needed anymore#033[00m
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.795 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.796 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff85046f530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.802 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:19 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  1 09:46:19 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 5.082s CPU time.
Dec  1 09:46:19 compute-0 systemd-machined[155812]: Machine qemu-15-instance-0000000e terminated.
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.922 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.930 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.968 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4070cce8-ccf0-4909-8358-9924882ce843 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:46:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:19.969 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4070cce8-ccf0-4909-8358-9924882ce843 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.972 189495 INFO nova.virt.libvirt.driver [-] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Instance destroyed successfully.#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.973 189495 DEBUG nova.objects.instance [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lazy-loading 'resources' on Instance uuid 4070cce8-ccf0-4909-8358-9924882ce843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:46:19 compute-0 neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be[254952]: [NOTICE]   (254956) : haproxy version is 2.8.14-c23fe91
Dec  1 09:46:19 compute-0 neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be[254952]: [NOTICE]   (254956) : path to executable is /usr/sbin/haproxy
Dec  1 09:46:19 compute-0 neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be[254952]: [WARNING]  (254956) : Exiting Master process...
Dec  1 09:46:19 compute-0 neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be[254952]: [WARNING]  (254956) : Exiting Master process...
Dec  1 09:46:19 compute-0 neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be[254952]: [ALERT]    (254956) : Current worker (254958) exited with code 143 (Terminated)
Dec  1 09:46:19 compute-0 neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be[254952]: [WARNING]  (254956) : All workers exited. Exiting... (0)
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.990 189495 DEBUG nova.virt.libvirt.vif [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-656807043',display_name='tempest-ServerAddressesTestJSON-server-656807043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-656807043',id=14,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:46:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bf37d9996bf440eb3bc55aa221d0ae6',ramdisk_id='',reservation_id='r-6vueod18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1747833979',owner_user_name='tempest-ServerAddressesTestJSON-1747833979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:46:15Z,user_data=None,user_id='d64b3ffc20d34dd5af4018e4ea24dabd',uuid=4070cce8-ccf0-4909-8358-9924882ce843,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.991 189495 DEBUG nova.network.os_vif_util [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Converting VIF {"id": "993e74c8-435c-4af8-8267-003c237479c4", "address": "fa:16:3e:7e:56:91", "network": {"id": "47f1cdb6-d949-499b-a4e6-73d3741aa9be", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-594371780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap993e74c8-43", "ovs_interfaceid": "993e74c8-435c-4af8-8267-003c237479c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.992 189495 DEBUG nova.network.os_vif_util [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:56:91,bridge_name='br-int',has_traffic_filtering=True,id=993e74c8-435c-4af8-8267-003c237479c4,network=Network(47f1cdb6-d949-499b-a4e6-73d3741aa9be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap993e74c8-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.992 189495 DEBUG os_vif [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:56:91,bridge_name='br-int',has_traffic_filtering=True,id=993e74c8-435c-4af8-8267-003c237479c4,network=Network(47f1cdb6-d949-499b-a4e6-73d3741aa9be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap993e74c8-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.993 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.993 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap993e74c8-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:19 compute-0 nova_compute[189491]: 2025-12-01 09:46:19.995 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:19 compute-0 systemd[1]: libpod-8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa.scope: Deactivated successfully.
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.001 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:46:20 compute-0 podman[255013]: 2025-12-01 09:46:20.002538412 +0000 UTC m=+0.087878086 container died 8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.004 189495 INFO os_vif [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:56:91,bridge_name='br-int',has_traffic_filtering=True,id=993e74c8-435c-4af8-8267-003c237479c4,network=Network(47f1cdb6-d949-499b-a4e6-73d3741aa9be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap993e74c8-43')#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.005 189495 INFO nova.virt.libvirt.driver [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Deleting instance files /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843_del#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.006 189495 INFO nova.virt.libvirt.driver [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Deletion of /var/lib/nova/instances/4070cce8-ccf0-4909-8358-9924882ce843_del complete#033[00m
Dec  1 09:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa-userdata-shm.mount: Deactivated successfully.
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.063 189495 INFO nova.compute.manager [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.063 189495 DEBUG oslo.service.loopingcall [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.064 189495 DEBUG nova.compute.manager [-] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.064 189495 DEBUG nova.network.neutron [-] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf39678119052d5766042772b0b79c5b92284461163701f4946f2ff80546f257-merged.mount: Deactivated successfully.
Dec  1 09:46:20 compute-0 podman[255013]: 2025-12-01 09:46:20.080469125 +0000 UTC m=+0.165808799 container cleanup 8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:46:20 compute-0 systemd[1]: libpod-conmon-8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa.scope: Deactivated successfully.
Dec  1 09:46:20 compute-0 podman[255059]: 2025-12-01 09:46:20.267918742 +0000 UTC m=+0.154390340 container remove 8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.281 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2e921d-f0b7-465b-ba31-0c644b712458]: (4, ('Mon Dec  1 09:46:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be (8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa)\n8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa\nMon Dec  1 09:46:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be (8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa)\n8e689a7231d87ca9564c5223ac259c551750307109b8b3d6999edf4aba3159aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.288 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6696a4ec-4719-4a98-8df6-f6dd31d40b05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.290 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47f1cdb6-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:20 compute-0 kernel: tap47f1cdb6-d0: left promiscuous mode
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.295 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.312 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[26a92690-61d6-4394-9005-82bfcceef79c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.314 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.333 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f31fb9-4085-467b-926d-986bcbf10b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.338 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b45177ad-5ad7-4693-a019-85acc5241e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.357 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb9388d-364b-4933-9d62-1a7c35ba05ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561392, 'reachable_time': 17937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255082, 'error': None, 'target': 'ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d47f1cdb6\x2dd949\x2d499b\x2da4e6\x2d73d3741aa9be.mount: Deactivated successfully.
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.363 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-47f1cdb6-d949-499b-a4e6-73d3741aa9be deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.363 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[7c56f2be-5b3b-4316-88dc-ceee49edf1cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.551 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1817 Content-Type: application/json Date: Mon, 01 Dec 2025 09:46:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2185aa9d-aeea-4f56-82b7-2c21d8e8d4ab x-openstack-request-id: req-2185aa9d-aeea-4f56-82b7-2c21d8e8d4ab _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.551 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4070cce8-ccf0-4909-8358-9924882ce843", "name": "tempest-ServerAddressesTestJSON-server-656807043", "status": "ACTIVE", "tenant_id": "0bf37d9996bf440eb3bc55aa221d0ae6", "user_id": "d64b3ffc20d34dd5af4018e4ea24dabd", "metadata": {}, "hostId": "0f2dc60325e61316044cc1e99383a26b0c2c7e5f165b77d60ee5a0bc", "image": {"id": "7ddeffd1-d06f-4a46-9e41-114974daa90e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7ddeffd1-d06f-4a46-9e41-114974daa90e"}]}, "flavor": {"id": "422f041c-a187-4aa2-8167-37f3eb0e89c2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/422f041c-a187-4aa2-8167-37f3eb0e89c2"}]}, "created": "2025-12-01T09:46:07Z", "updated": "2025-12-01T09:46:20Z", "addresses": {"tempest-ServerAddressesTestJSON-594371780-network": [{"version": 4, "addr": "10.100.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7e:56:91"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4070cce8-ccf0-4909-8358-9924882ce843"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4070cce8-ccf0-4909-8358-9924882ce843"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T09:46:15.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "deleting", "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.551 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4070cce8-ccf0-4909-8358-9924882ce843 used request id req-2185aa9d-aeea-4f56-82b7-2c21d8e8d4ab request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '4070cce8-ccf0-4909-8358-9924882ce843' (instance-0000000e)
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: Domain not found: no domain with matching uuid '4070cce8-ccf0-4909-8358-9924882ce843' (instance-0000000e): libvirt.libvirtError: Domain not found: no domain with matching uuid '4070cce8-ccf0-4909-8358-9924882ce843' (instance-0000000e)
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     return fut.result()
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     return self.__get_result()
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     raise self._exception
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 274, in discover_libvirt_polling
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     dom_state = domain.state()[0]
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager                 ^^^^^^^^^^^^^^
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/site-packages/libvirt.py", line 3266, in state
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager     raise libvirtError('virDomainGetState() failed')
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager libvirt.libvirtError: Domain not found: no domain with matching uuid '4070cce8-ccf0-4909-8358-9924882ce843' (instance-0000000e)
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.557 14 ERROR ceilometer.polling.manager 
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.566 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 70f48496-14bd-4e6f-8706-262d8e6b9510 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.567 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/70f48496-14bd-4e6f-8706-262d8e6b9510 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.693 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "70f48496-14bd-4e6f-8706-262d8e6b9510" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.693 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.694 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.694 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.694 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.695 189495 INFO nova.compute.manager [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Terminating instance#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.696 189495 DEBUG nova.compute.manager [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.720 189495 DEBUG nova.network.neutron [-] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:46:20 compute-0 kernel: tap9ba63f14-2e (unregistering): left promiscuous mode
Dec  1 09:46:20 compute-0 NetworkManager[56318]: <info>  [1764582380.7289] device (tap9ba63f14-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.733 189495 INFO nova.compute.manager [-] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Took 0.67 seconds to deallocate network for instance.#033[00m
Dec  1 09:46:20 compute-0 ovn_controller[97794]: 2025-12-01T09:46:20Z|00162|binding|INFO|Releasing lport 9ba63f14-2eaa-45bf-8c16-59bd3a7893de from this chassis (sb_readonly=0)
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.744 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:20 compute-0 ovn_controller[97794]: 2025-12-01T09:46:20Z|00163|binding|INFO|Setting lport 9ba63f14-2eaa-45bf-8c16-59bd3a7893de down in Southbound
Dec  1 09:46:20 compute-0 ovn_controller[97794]: 2025-12-01T09:46:20Z|00164|binding|INFO|Removing iface tap9ba63f14-2e ovn-installed in OVS
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.749 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.753 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:a3:58 10.100.0.10'], port_security=['fa:16:3e:06:a3:58 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '70f48496-14bd-4e6f-8706-262d8e6b9510', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee60ff0d117e468aa42c7d39022568ea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f9efef3-36d7-485c-9abd-714c5dc93256', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45465482-a276-408a-8d6b-656a92e66817, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=9ba63f14-2eaa-45bf-8c16-59bd3a7893de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.754 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 9ba63f14-2eaa-45bf-8c16-59bd3a7893de in datapath 4f3e9b63-cba6-412e-ba07-d66a8b38af02 unbound from our chassis#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.755 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f3e9b63-cba6-412e-ba07-d66a8b38af02, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.757 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e48ea7c4-334e-4939-b656-0e7205aa5732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:20 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:20.757 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02 namespace which is not needed anymore#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.767 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.791 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.792 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:20 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  1 09:46:20 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 44.688s CPU time.
Dec  1 09:46:20 compute-0 systemd-machined[155812]: Machine qemu-9-instance-00000009 terminated.
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.890 189495 DEBUG nova.compute.provider_tree [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.911 189495 DEBUG nova.scheduler.client.report [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.930 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.956 189495 INFO nova.scheduler.client.report [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Deleted allocations for instance 4070cce8-ccf0-4909-8358-9924882ce843#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.967 189495 INFO nova.virt.libvirt.driver [-] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Instance destroyed successfully.#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.968 189495 DEBUG nova.objects.instance [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lazy-loading 'resources' on Instance uuid 70f48496-14bd-4e6f-8706-262d8e6b9510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.969 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1858 Content-Type: application/json Date: Mon, 01 Dec 2025 09:46:20 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e76bca26-e957-4363-9271-d3e271b0b3cb x-openstack-request-id: req-e76bca26-e957-4363-9271-d3e271b0b3cb _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.970 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "70f48496-14bd-4e6f-8706-262d8e6b9510", "name": "tempest-TestNetworkBasicOps-server-943973460", "status": "ACTIVE", "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "user_id": "3f19699d7cb4493292a31daef496a1c2", "metadata": {}, "hostId": "a22237af3df9d3f094f018069e77419823f897bb1e616d07da72bc47", "image": {"id": "7ddeffd1-d06f-4a46-9e41-114974daa90e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7ddeffd1-d06f-4a46-9e41-114974daa90e"}]}, "flavor": {"id": "422f041c-a187-4aa2-8167-37f3eb0e89c2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/422f041c-a187-4aa2-8167-37f3eb0e89c2"}]}, "created": "2025-12-01T09:44:12Z", "updated": "2025-12-01T09:46:20Z", "addresses": {"tempest-network-smoke--1085714181": [{"version": 4, "addr": "10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:06:a3:58"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/70f48496-14bd-4e6f-8706-262d8e6b9510"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/70f48496-14bd-4e6f-8706-262d8e6b9510"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-240726540", "OS-SRV-USG:launched_at": "2025-12-01T09:44:29.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-187856253"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "deleting", "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} 
_http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.970 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/70f48496-14bd-4e6f-8706-262d8e6b9510 used request id req-e76bca26-e957-4363-9271-d3e271b0b3cb request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.973 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '70f48496-14bd-4e6f-8706-262d8e6b9510', 'name': 'tempest-TestNetworkBasicOps-server-943973460', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'shutdown', 'tenant_id': 'ee60ff0d117e468aa42c7d39022568ea', 'user_id': '3f19699d7cb4493292a31daef496a1c2', 'hostId': 'a22237af3df9d3f094f018069e77419823f897bb1e616d07da72bc47', 'status': 'stopped', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.978 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:46:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:20.979 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:46:20 compute-0 neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02[253147]: [NOTICE]   (253151) : haproxy version is 2.8.14-c23fe91
Dec  1 09:46:20 compute-0 neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02[253147]: [NOTICE]   (253151) : path to executable is /usr/sbin/haproxy
Dec  1 09:46:20 compute-0 neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02[253147]: [WARNING]  (253151) : Exiting Master process...
Dec  1 09:46:20 compute-0 neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02[253147]: [WARNING]  (253151) : Exiting Master process...
Dec  1 09:46:20 compute-0 neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02[253147]: [ALERT]    (253151) : Current worker (253153) exited with code 143 (Terminated)
Dec  1 09:46:20 compute-0 neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02[253147]: [WARNING]  (253151) : All workers exited. Exiting... (0)
Dec  1 09:46:20 compute-0 systemd[1]: libpod-dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282.scope: Deactivated successfully.
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.987 189495 DEBUG nova.virt.libvirt.vif [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:44:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-943973460',display_name='tempest-TestNetworkBasicOps-server-943973460',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-943973460',id=9,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF0zO5PaN3W4VHI3MwtcjwVXnFCS2bVnALc/xgvovRqym1jyHZHeVTr6rztYp8+lLKApFr2SvhwBydda3c7yRYWVMdYesl/HUKsBijWwjyOiRwFrk6mYhv5XoI8BDBYXvw==',key_name='tempest-TestNetworkBasicOps-240726540',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:44:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ee60ff0d117e468aa42c7d39022568ea',ramdisk_id='',reservation_id='r-fqfqoply',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-291434657',owner_user_name='tempest-TestNetworkBasicOps-291434657-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:44:29Z,user_data=None,user_id='3f19699d7cb4493292a31daef496a1c2',uuid=70f48496-14bd-4e6f-8706-262d8e6b9510,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.988 189495 DEBUG nova.network.os_vif_util [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converting VIF {"id": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "address": "fa:16:3e:06:a3:58", "network": {"id": "4f3e9b63-cba6-412e-ba07-d66a8b38af02", "bridge": "br-int", "label": "tempest-network-smoke--1085714181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee60ff0d117e468aa42c7d39022568ea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ba63f14-2e", "ovs_interfaceid": "9ba63f14-2eaa-45bf-8c16-59bd3a7893de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.988 189495 DEBUG nova.network.os_vif_util [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:a3:58,bridge_name='br-int',has_traffic_filtering=True,id=9ba63f14-2eaa-45bf-8c16-59bd3a7893de,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ba63f14-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.988 189495 DEBUG os_vif [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:a3:58,bridge_name='br-int',has_traffic_filtering=True,id=9ba63f14-2eaa-45bf-8c16-59bd3a7893de,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ba63f14-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.989 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.990 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ba63f14-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.991 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:20 compute-0 podman[255101]: 2025-12-01 09:46:20.993540691 +0000 UTC m=+0.091839114 container died dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.993 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.995 189495 INFO os_vif [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:a3:58,bridge_name='br-int',has_traffic_filtering=True,id=9ba63f14-2eaa-45bf-8c16-59bd3a7893de,network=Network(4f3e9b63-cba6-412e-ba07-d66a8b38af02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ba63f14-2e')#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.996 189495 INFO nova.virt.libvirt.driver [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Deleting instance files /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510_del#033[00m
Dec  1 09:46:20 compute-0 nova_compute[189491]: 2025-12-01 09:46:20.996 189495 INFO nova.virt.libvirt.driver [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Deletion of /var/lib/nova/instances/70f48496-14bd-4e6f-8706-262d8e6b9510_del complete#033[00m
Dec  1 09:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282-userdata-shm.mount: Deactivated successfully.
Dec  1 09:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b9d8db5ea65397090274cb271ee72eb5b09427fda6aef892f13909644fe44cd-merged.mount: Deactivated successfully.
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.050 189495 DEBUG oslo_concurrency.lockutils [None req-a46c0911-af4f-4753-b463-7788aa10c503 d64b3ffc20d34dd5af4018e4ea24dabd 0bf37d9996bf440eb3bc55aa221d0ae6 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:21 compute-0 podman[255101]: 2025-12-01 09:46:21.056389586 +0000 UTC m=+0.154688009 container cleanup dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:46:21 compute-0 systemd[1]: libpod-conmon-dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282.scope: Deactivated successfully.
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.086 189495 INFO nova.compute.manager [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.086 189495 DEBUG oslo.service.loopingcall [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.086 189495 DEBUG nova.compute.manager [-] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.087 189495 DEBUG nova.network.neutron [-] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:46:21 compute-0 podman[255148]: 2025-12-01 09:46:21.14826633 +0000 UTC m=+0.062375415 container remove dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.158 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e72aa300-73ee-4f11-8d4c-518f4a350686]: (4, ('Mon Dec  1 09:46:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02 (dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282)\ndcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282\nMon Dec  1 09:46:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02 (dcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282)\ndcf6631e40eaa40eb9680472c1f7076f93e81d77eb3ac911827c176524361282\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.165 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[52a32e3d-8012-4edf-b9da-bb79f8cfc2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.167 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f3e9b63-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:21 compute-0 kernel: tap4f3e9b63-c0: left promiscuous mode
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.171 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.177 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[308f6ae2-b4a8-43db-832d-6fbe3be81057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.185 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.202 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1239f5-d12f-4f01-859b-30e466b9b67d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.204 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[676f7a56-022d-493c-af7b-9ddbc3a73250]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.224 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a41314f8-5366-425a-bbbc-e9a79046f15b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550194, 'reachable_time': 39415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255161, 'error': None, 'target': 'ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.227 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f3e9b63-cba6-412e-ba07-d66a8b38af02 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:46:21 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:21.227 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[a50d7b8d-61e0-4288-b710-1a59d2b3e4ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d4f3e9b63\x2dcba6\x2d412e\x2dba07\x2dd66a8b38af02.mount: Deactivated successfully.
Dec  1 09:46:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:21.574 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Mon, 01 Dec 2025 09:46:20 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c75b29d2-2eee-446d-acdc-e98672d98778 x-openstack-request-id: req-c75b29d2-2eee-446d-acdc-e98672d98778 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:46:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:21.574 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0", "name": "te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2", "status": "ACTIVE", "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "user_id": "c54f3a4a232b4a739be88e97f2094d4f", "metadata": {"metering.server_group": "e03937ad-4d2d-4edc-9b33-ed8d878566ca"}, "hostId": "b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406", "image": {"id": "280f4e4d-4a12-4164-a687-6106a9afc7fe", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/280f4e4d-4a12-4164-a687-6106a9afc7fe"}]}, "flavor": {"id": "422f041c-a187-4aa2-8167-37f3eb0e89c2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/422f041c-a187-4aa2-8167-37f3eb0e89c2"}]}, "created": "2025-12-01T09:44:59Z", "updated": "2025-12-01T09:45:07Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.156", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:50:a8:e2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T09:45:07.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response 
/usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:46:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:21.574 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 used request id req-c75b29d2-2eee-446d-acdc-e98672d98778 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:46:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:21.576 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'name': 'te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:46:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:21.580 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b6b22803-169f-45be-85f7-058bfa3f2970 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:46:21 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:21.582 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b6b22803-169f-45be-85f7-058bfa3f2970 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.657 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received event network-vif-unplugged-993e74c8-435c-4af8-8267-003c237479c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.657 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "4070cce8-ccf0-4909-8358-9924882ce843-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.657 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.658 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.658 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] No waiting events found dispatching network-vif-unplugged-993e74c8-435c-4af8-8267-003c237479c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.658 189495 WARNING nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received unexpected event network-vif-unplugged-993e74c8-435c-4af8-8267-003c237479c4 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.658 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received event network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.659 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "4070cce8-ccf0-4909-8358-9924882ce843-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.659 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.659 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "4070cce8-ccf0-4909-8358-9924882ce843-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.659 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] No waiting events found dispatching network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.660 189495 WARNING nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received unexpected event network-vif-plugged-993e74c8-435c-4af8-8267-003c237479c4 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.660 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Received event network-vif-deleted-993e74c8-435c-4af8-8267-003c237479c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.660 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-vif-unplugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.660 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.661 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.661 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.661 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] No waiting events found dispatching network-vif-unplugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.661 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-vif-unplugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.661 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.662 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.662 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.662 189495 DEBUG oslo_concurrency.lockutils [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.662 189495 DEBUG nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] No waiting events found dispatching network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.663 189495 WARNING nova.compute.manager [req-e8f81fa1-be37-4471-b633-7c9087929419 req-c22262f5-959a-4c76-b1cd-737573aada18 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received unexpected event network-vif-plugged-9ba63f14-2eaa-45bf-8c16-59bd3a7893de for instance with vm_state active and task_state deleting.#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.902 189495 DEBUG nova.network.neutron [-] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.918 189495 INFO nova.compute.manager [-] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Took 0.83 seconds to deallocate network for instance.#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.952 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:21 compute-0 nova_compute[189491]: 2025-12-01 09:46:21.953 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:22 compute-0 nova_compute[189491]: 2025-12-01 09:46:22.035 189495 DEBUG nova.compute.provider_tree [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:46:22 compute-0 nova_compute[189491]: 2025-12-01 09:46:22.057 189495 DEBUG nova.scheduler.client.report [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:46:22 compute-0 nova_compute[189491]: 2025-12-01 09:46:22.075 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:22 compute-0 nova_compute[189491]: 2025-12-01 09:46:22.107 189495 INFO nova.scheduler.client.report [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Deleted allocations for instance 70f48496-14bd-4e6f-8706-262d8e6b9510#033[00m
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.160 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2083 Content-Type: application/json Date: Mon, 01 Dec 2025 09:46:21 GMT Keep-Alive: timeout=5, max=97 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-33ff2c14-0dd2-48ab-8fa9-02ede172b8e1 x-openstack-request-id: req-33ff2c14-0dd2-48ab-8fa9-02ede172b8e1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.161 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b6b22803-169f-45be-85f7-058bfa3f2970", "name": "tempest-TestServerBasicOps-server-1504290779", "status": "ACTIVE", "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "user_id": "b40ddefd6a0e437e95ddb1bc36d5ec0b", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "5fd17b731c7e9b265b942f880ae8db441b6d308f225a124b81f699d1", "image": {"id": "7ddeffd1-d06f-4a46-9e41-114974daa90e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7ddeffd1-d06f-4a46-9e41-114974daa90e"}]}, "flavor": {"id": "422f041c-a187-4aa2-8167-37f3eb0e89c2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/422f041c-a187-4aa2-8167-37f3eb0e89c2"}]}, "created": "2025-12-01T09:45:37Z", "updated": "2025-12-01T09:45:45Z", "addresses": {"tempest-TestServerBasicOps-201869635-network": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:af:65:c9"}, {"version": 4, "addr": "192.168.122.191", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:af:65:c9"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b6b22803-169f-45be-85f7-058bfa3f2970"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b6b22803-169f-45be-85f7-058bfa3f2970"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-1010317755", "OS-SRV-USG:launched_at": "2025-12-01T09:45:45.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--346129271"}, {"name": "tempest-secgroup-smoke-1778217019"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": 
"instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.161 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b6b22803-169f-45be-85f7-058bfa3f2970 used request id req-33ff2c14-0dd2-48ab-8fa9-02ede172b8e1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.163 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b22803-169f-45be-85f7-058bfa3f2970', 'name': 'tempest-TestServerBasicOps-server-1504290779', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7ddeffd1-d06f-4a46-9e41-114974daa90e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'db1d07a763fd4c1d806a7cf648ffae54', 'user_id': 'b40ddefd6a0e437e95ddb1bc36d5ec0b', 'hostId': '5fd17b731c7e9b265b942f880ae8db441b6d308f225a124b81f699d1', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.163 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.165 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:46:22.164947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.168 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 nova_compute[189491]: 2025-12-01 09:46:22.173 189495 DEBUG oslo_concurrency.lockutils [None req-1ecdd1ef-1b48-45d9-9b8d-0d2c702701b6 3f19699d7cb4493292a31daef496a1c2 ee60ff0d117e468aa42c7d39022568ea - - default default] Lock "70f48496-14bd-4e6f-8706-262d8e6b9510" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.189 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.190 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.209 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.210 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.211 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.213 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:46:22.213005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.214 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.255 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 537631881 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.256 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 54970899 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 nova_compute[189491]: 2025-12-01 09:46:22.271 189495 DEBUG nova.compute.manager [req-1b421d08-832b-4a71-ae68-75b8553787b9 req-62dbe172-46ba-4eb4-ba6f-e5eccabf878f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Received event network-vif-deleted-9ba63f14-2eaa-45bf-8c16-59bd3a7893de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.297 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.read.latency volume: 479748111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.298 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.read.latency volume: 101131281 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.300 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:46:22.301438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.303 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.303 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.304 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.305 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.usage volume: 29753344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.305 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.307 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:46:22.308702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.310 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.310 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 72835072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.311 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.312 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.write.bytes volume: 72761344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.312 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.314 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:46:22.314925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.316 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.338 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ovn_controller[97794]: 2025-12-01T09:46:22Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:65:c9 10.100.0.9
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.361 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:46:22 compute-0 ovn_controller[97794]: 2025-12-01T09:46:22Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:65:c9 10.100.0.9
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.365 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:46:22.365234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.367 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.367 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 3008386139 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.368 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.368 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.write.latency volume: 3601528871 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.369 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.370 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.371 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:46:22.371670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.373 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.373 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.374 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.375 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.375 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.376 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.377 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.378 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:46:22.377931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.379 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.383 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 / tape1536dee-e9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.383 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.387 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b6b22803-169f-45be-85f7-058bfa3f2970 / tap05122117-05 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.387 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.389 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.389 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.390 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.390 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.391 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-943973460>, <NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2>, <NovaLikeServer: tempest-TestServerBasicOps-server-1504290779>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-943973460>, <NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2>, <NovaLikeServer: tempest-TestServerBasicOps-server-1504290779>]
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.391 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T09:46:22.390264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.392 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.392 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.392 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.393 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.394 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.394 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.395 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:46:22.393111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:46:22.395747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.397 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.397 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.398 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.399 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.399 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.401 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:46:22.400316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.402 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:46:22.402895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.404 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.404 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.405 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:46:22.407348) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.408 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.409 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.409 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.410 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.411 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:46:22.411738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.413 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.413 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.414 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.outgoing.bytes volume: 992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.415 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.415 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.416 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:46:22.416498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.417 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.418 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.418 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.419 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.421 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.421 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.421 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-943973460>, <NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2>, <NovaLikeServer: tempest-TestServerBasicOps-server-1504290779>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-943973460>, <NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2>, <NovaLikeServer: tempest-TestServerBasicOps-server-1504290779>]
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.422 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T09:46:22.421113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.423 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.425 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.426 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.426 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/memory.usage volume: 43.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.427 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/memory.usage volume: 40.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.428 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.429 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.431 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.431 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.431 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.433 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:46:22.424778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:46:22.429281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.434 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:46:22.434404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.437 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.438 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.438 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.outgoing.packets volume: 6 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.439 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.440 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.442 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.442 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/cpu volume: 73400000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.442 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/cpu volume: 34550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.444 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.444 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:46:22.440607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:46:22.444578) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.446 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.446 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.446 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.447 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.447 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.448 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.448 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.449 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.449 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.450 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.451 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:46:22.449573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.451 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:46:22.453700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.455 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.455 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 1094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.455 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.456 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.read.requests volume: 1091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.456 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.457 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.458 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:46:22.458772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510'
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.460 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000009, id=70f48496-14bd-4e6f-8706-262d8e6b9510>: [Error Code 42] Domain not found: no domain with matching uuid '70f48496-14bd-4e6f-8706-262d8e6b9510' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.460 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.460 14 DEBUG ceilometer.compute.pollsters [-] b6b22803-169f-45be-85f7-058bfa3f2970/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:22 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:46:22.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:46:23 compute-0 nova_compute[189491]: 2025-12-01 09:46:23.206 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:46:25 compute-0 ovn_controller[97794]: 2025-12-01T09:46:25Z|00165|binding|INFO|Releasing lport 6265634a-8973-4de4-bd20-6e57721ad464 from this chassis (sb_readonly=0)
Dec  1 09:46:25 compute-0 ovn_controller[97794]: 2025-12-01T09:46:25Z|00166|binding|INFO|Releasing lport 7159c06b-520e-4157-9235-0b4ddbac66cf from this chassis (sb_readonly=0)
Dec  1 09:46:25 compute-0 nova_compute[189491]: 2025-12-01 09:46:25.197 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:46:25 compute-0 ovn_controller[97794]: 2025-12-01T09:46:25Z|00167|binding|INFO|Releasing lport 6265634a-8973-4de4-bd20-6e57721ad464 from this chassis (sb_readonly=0)
Dec  1 09:46:25 compute-0 ovn_controller[97794]: 2025-12-01T09:46:25Z|00168|binding|INFO|Releasing lport 7159c06b-520e-4157-9235-0b4ddbac66cf from this chassis (sb_readonly=0)
Dec  1 09:46:25 compute-0 nova_compute[189491]: 2025-12-01 09:46:25.262 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:46:25 compute-0 podman[255162]: 2025-12-01 09:46:25.716268345 +0000 UTC m=+0.080720253 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:46:25 compute-0 podman[255163]: 2025-12-01 09:46:25.741916541 +0000 UTC m=+0.111257327 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:46:25 compute-0 nova_compute[189491]: 2025-12-01 09:46:25.992 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:26.537 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:26.538 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:26.539 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:28 compute-0 nova_compute[189491]: 2025-12-01 09:46:28.209 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:29 compute-0 podman[203700]: time="2025-12-01T09:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:46:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30758 "" "Go-http-client/1.1"
Dec  1 09:46:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5269 "" "Go-http-client/1.1"
Dec  1 09:46:29 compute-0 nova_compute[189491]: 2025-12-01 09:46:29.910 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764582374.7376642, 7535b6dd-3ef8-4847-812d-f0a9208df287 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:46:29 compute-0 nova_compute[189491]: 2025-12-01 09:46:29.911 189495 INFO nova.compute.manager [-] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:46:29 compute-0 nova_compute[189491]: 2025-12-01 09:46:29.935 189495 DEBUG nova.compute.manager [None req-7efed6fc-9e7b-42f2-ad70-15ab3d7cc781 - - - - - -] [instance: 7535b6dd-3ef8-4847-812d-f0a9208df287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:46:30 compute-0 nova_compute[189491]: 2025-12-01 09:46:30.994 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:31 compute-0 openstack_network_exporter[205866]: ERROR   09:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:46:31 compute-0 openstack_network_exporter[205866]: ERROR   09:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:46:31 compute-0 openstack_network_exporter[205866]: ERROR   09:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:46:31 compute-0 openstack_network_exporter[205866]: ERROR   09:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:46:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:46:31 compute-0 openstack_network_exporter[205866]: ERROR   09:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:46:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:46:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:32.125 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:46:32 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:32.126 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:46:32 compute-0 nova_compute[189491]: 2025-12-01 09:46:32.128 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:33 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:33.128 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:33 compute-0 nova_compute[189491]: 2025-12-01 09:46:33.212 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:33 compute-0 podman[255206]: 2025-12-01 09:46:33.806699124 +0000 UTC m=+0.155801015 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:46:33 compute-0 podman[255227]: 2025-12-01 09:46:33.905058857 +0000 UTC m=+0.080191720 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:46:33 compute-0 podman[255228]: 2025-12-01 09:46:33.936392871 +0000 UTC m=+0.117273215 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9)
Dec  1 09:46:34 compute-0 nova_compute[189491]: 2025-12-01 09:46:34.970 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764582379.9671674, 4070cce8-ccf0-4909-8358-9924882ce843 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:46:34 compute-0 nova_compute[189491]: 2025-12-01 09:46:34.971 189495 INFO nova.compute.manager [-] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:46:34 compute-0 nova_compute[189491]: 2025-12-01 09:46:34.991 189495 DEBUG nova.compute.manager [None req-76449897-06ad-4d45-a58d-f2f5f2ab172b - - - - - -] [instance: 4070cce8-ccf0-4909-8358-9924882ce843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:46:35 compute-0 nova_compute[189491]: 2025-12-01 09:46:35.965 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764582380.9637635, 70f48496-14bd-4e6f-8706-262d8e6b9510 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 09:46:35 compute-0 nova_compute[189491]: 2025-12-01 09:46:35.966 189495 INFO nova.compute.manager [-] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] VM Stopped (Lifecycle Event)#033[00m
Dec  1 09:46:35 compute-0 nova_compute[189491]: 2025-12-01 09:46:35.984 189495 DEBUG nova.compute.manager [None req-11a1655e-d57e-4611-997c-6f10dcac2b33 - - - - - -] [instance: 70f48496-14bd-4e6f-8706-262d8e6b9510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 09:46:35 compute-0 nova_compute[189491]: 2025-12-01 09:46:35.997 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:38 compute-0 nova_compute[189491]: 2025-12-01 09:46:38.213 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:39 compute-0 nova_compute[189491]: 2025-12-01 09:46:39.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:39 compute-0 nova_compute[189491]: 2025-12-01 09:46:39.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:46:39 compute-0 nova_compute[189491]: 2025-12-01 09:46:39.925 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:46:39 compute-0 nova_compute[189491]: 2025-12-01 09:46:39.927 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:46:39 compute-0 nova_compute[189491]: 2025-12-01 09:46:39.928 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:46:41 compute-0 nova_compute[189491]: 2025-12-01 09:46:41.000 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:41 compute-0 podman[255273]: 2025-12-01 09:46:41.695409048 +0000 UTC m=+0.066792372 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 09:46:41 compute-0 podman[255272]: 2025-12-01 09:46:41.707430191 +0000 UTC m=+0.078593460 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 09:46:42 compute-0 nova_compute[189491]: 2025-12-01 09:46:42.153 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:46:42 compute-0 nova_compute[189491]: 2025-12-01 09:46:42.170 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:46:42 compute-0 nova_compute[189491]: 2025-12-01 09:46:42.171 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:46:43 compute-0 nova_compute[189491]: 2025-12-01 09:46:43.216 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:44 compute-0 podman[255319]: 2025-12-01 09:46:44.718935829 +0000 UTC m=+0.087982290 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:46:44 compute-0 podman[255320]: 2025-12-01 09:46:44.75334069 +0000 UTC m=+0.107163379 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 09:46:46 compute-0 nova_compute[189491]: 2025-12-01 09:46:46.004 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.745 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.747 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.860 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.959 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:47 compute-0 nova_compute[189491]: 2025-12-01 09:46:47.961 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.027 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.040 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.126 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.127 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.189 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.218 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.564 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.565 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5007MB free_disk=72.24815368652344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.566 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.566 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.636 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.637 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance b6b22803-169f-45be-85f7-058bfa3f2970 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.637 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.638 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.694 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.714 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.735 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:46:48 compute-0 nova_compute[189491]: 2025-12-01 09:46:48.736 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:51 compute-0 nova_compute[189491]: 2025-12-01 09:46:51.007 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:51 compute-0 nova_compute[189491]: 2025-12-01 09:46:51.737 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:51 compute-0 nova_compute[189491]: 2025-12-01 09:46:51.738 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:52.392 106766 DEBUG eventlet.wsgi.server [-] (106766) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:52.394 106766 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: Accept: */*#015
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: Connection: close#015
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: Content-Type: text/plain#015
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: Host: 169.254.169.254#015
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: User-Agent: curl/7.84.0#015
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: X-Forwarded-For: 10.100.0.9#015
Dec  1 09:46:52 compute-0 ovn_metadata_agent[106654]: X-Ovn-Network-Id: 9a42964e-1108-49cc-ac3f-41165766e2ed __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  1 09:46:52 compute-0 nova_compute[189491]: 2025-12-01 09:46:52.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:52 compute-0 nova_compute[189491]: 2025-12-01 09:46:52.721 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:52 compute-0 nova_compute[189491]: 2025-12-01 09:46:52.722 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:46:53 compute-0 nova_compute[189491]: 2025-12-01 09:46:53.221 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:53.535 106766 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:53.538 106766 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.1439083#033[00m
Dec  1 09:46:53 compute-0 haproxy-metadata-proxy-9a42964e-1108-49cc-ac3f-41165766e2ed[254575]: 10.100.0.9:57020 [01/Dec/2025:09:46:52.390] listener listener/metadata 0/0/0/1147/1147 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:53.665 106766 DEBUG eventlet.wsgi.server [-] (106766) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:53.666 106766 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: Accept: */*#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: Connection: close#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: Content-Length: 100#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: Content-Type: application/x-www-form-urlencoded#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: Host: 169.254.169.254#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: User-Agent: curl/7.84.0#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: X-Forwarded-For: 10.100.0.9#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: X-Ovn-Network-Id: 9a42964e-1108-49cc-ac3f-41165766e2ed#015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: #015
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  1 09:46:53 compute-0 nova_compute[189491]: 2025-12-01 09:46:53.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:53.958 106766 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  1 09:46:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:53.959 106766 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2931843#033[00m
Dec  1 09:46:53 compute-0 haproxy-metadata-proxy-9a42964e-1108-49cc-ac3f-41165766e2ed[254575]: 10.100.0.9:57022 [01/Dec/2025:09:46:53.664] listener listener/metadata 0/0/0/295/295 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec  1 09:46:54 compute-0 nova_compute[189491]: 2025-12-01 09:46:54.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:46:55 compute-0 nova_compute[189491]: 2025-12-01 09:46:55.955 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "b6b22803-169f-45be-85f7-058bfa3f2970" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:55 compute-0 nova_compute[189491]: 2025-12-01 09:46:55.956 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:55 compute-0 nova_compute[189491]: 2025-12-01 09:46:55.957 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:55 compute-0 nova_compute[189491]: 2025-12-01 09:46:55.957 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:55 compute-0 nova_compute[189491]: 2025-12-01 09:46:55.958 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:55 compute-0 nova_compute[189491]: 2025-12-01 09:46:55.959 189495 INFO nova.compute.manager [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Terminating instance#033[00m
Dec  1 09:46:55 compute-0 nova_compute[189491]: 2025-12-01 09:46:55.960 189495 DEBUG nova.compute.manager [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:46:55 compute-0 kernel: tap05122117-05 (unregistering): left promiscuous mode
Dec  1 09:46:56 compute-0 NetworkManager[56318]: <info>  [1764582416.0101] device (tap05122117-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.017 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 ovn_controller[97794]: 2025-12-01T09:46:56Z|00169|binding|INFO|Releasing lport 05122117-0522-4844-80d6-4425d6fae978 from this chassis (sb_readonly=0)
Dec  1 09:46:56 compute-0 ovn_controller[97794]: 2025-12-01T09:46:56Z|00170|binding|INFO|Setting lport 05122117-0522-4844-80d6-4425d6fae978 down in Southbound
Dec  1 09:46:56 compute-0 ovn_controller[97794]: 2025-12-01T09:46:56Z|00171|binding|INFO|Removing iface tap05122117-05 ovn-installed in OVS
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.022 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.024 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.027 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:65:c9 10.100.0.9'], port_security=['fa:16:3e:af:65:c9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6b22803-169f-45be-85f7-058bfa3f2970', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a42964e-1108-49cc-ac3f-41165766e2ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db1d07a763fd4c1d806a7cf648ffae54', 'neutron:revision_number': '4', 'neutron:security_group_ids': '069c984d-c26e-4a65-8713-d57ad23780ec a20c149f-05db-4aff-83b9-441644898711', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f98b73b-931c-4f7b-978d-72f3c89b3942, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=05122117-0522-4844-80d6-4425d6fae978) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.029 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 05122117-0522-4844-80d6-4425d6fae978 in datapath 9a42964e-1108-49cc-ac3f-41165766e2ed unbound from our chassis#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.030 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a42964e-1108-49cc-ac3f-41165766e2ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.032 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[3541c0ba-b226-4790-bdd5-b1c2cf0240c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.033 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed namespace which is not needed anymore#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.063 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  1 09:46:56 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 41.946s CPU time.
Dec  1 09:46:56 compute-0 systemd-machined[155812]: Machine qemu-14-instance-0000000d terminated.
Dec  1 09:46:56 compute-0 podman[255373]: 2025-12-01 09:46:56.130321512 +0000 UTC m=+0.084713229 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:46:56 compute-0 podman[255376]: 2025-12-01 09:46:56.153958669 +0000 UTC m=+0.116945806 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:46:56 compute-0 neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed[254569]: [NOTICE]   (254573) : haproxy version is 2.8.14-c23fe91
Dec  1 09:46:56 compute-0 neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed[254569]: [NOTICE]   (254573) : path to executable is /usr/sbin/haproxy
Dec  1 09:46:56 compute-0 neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed[254569]: [ALERT]    (254573) : Current worker (254575) exited with code 143 (Terminated)
Dec  1 09:46:56 compute-0 neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed[254569]: [WARNING]  (254573) : All workers exited. Exiting... (0)
Dec  1 09:46:56 compute-0 systemd[1]: libpod-6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1.scope: Deactivated successfully.
Dec  1 09:46:56 compute-0 podman[255433]: 2025-12-01 09:46:56.210450229 +0000 UTC m=+0.062570159 container died 6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.250 189495 INFO nova.virt.libvirt.driver [-] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Instance destroyed successfully.#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.251 189495 DEBUG nova.objects.instance [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lazy-loading 'resources' on Instance uuid b6b22803-169f-45be-85f7-058bfa3f2970 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4585a309b274e2655d189ab45ac5994ff00c1bfaab1a29917668f2d82e03a91-merged.mount: Deactivated successfully.
Dec  1 09:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1-userdata-shm.mount: Deactivated successfully.
Dec  1 09:46:56 compute-0 podman[255433]: 2025-12-01 09:46:56.265605716 +0000 UTC m=+0.117725616 container cleanup 6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.275 189495 DEBUG nova.virt.libvirt.vif [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:45:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1504290779',display_name='tempest-TestServerBasicOps-server-1504290779',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1504290779',id=13,image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsyVwDEy9zFWo1byh4pafiOXmiB/WkK4D/hrDdFOv34J8k/xsRd1CCuGmvU2MUbCoy8qNShC4AQphvN5GZVeRhwJHN24UHvx0V+AFb/wVWYzmICwY2RteV99ijJRZ3ZZg==',key_name='tempest-TestServerBasicOps-1010317755',keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:45:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db1d07a763fd4c1d806a7cf648ffae54',ramdisk_id='',reservation_id='r-mnemcuob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7ddeffd1-d06f-4a46-9e41-114974daa90e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-818581629',owner_user_name='tempest-TestServerBasicOps-818581629-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:46:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b40ddefd6a0e437e95ddb1bc36d5ec0b',uuid=b6b22803-169f-45be-85f7-058bfa3f2970,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05122117-0522-4844-80d6-4425d6fae978", "address": 
"fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.276 189495 DEBUG nova.network.os_vif_util [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Converting VIF {"id": "05122117-0522-4844-80d6-4425d6fae978", "address": "fa:16:3e:af:65:c9", "network": {"id": "9a42964e-1108-49cc-ac3f-41165766e2ed", "bridge": "br-int", "label": "tempest-TestServerBasicOps-201869635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db1d07a763fd4c1d806a7cf648ffae54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05122117-05", "ovs_interfaceid": "05122117-0522-4844-80d6-4425d6fae978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.278 189495 DEBUG nova.network.os_vif_util [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:65:c9,bridge_name='br-int',has_traffic_filtering=True,id=05122117-0522-4844-80d6-4425d6fae978,network=Network(9a42964e-1108-49cc-ac3f-41165766e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05122117-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.279 189495 DEBUG os_vif [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:65:c9,bridge_name='br-int',has_traffic_filtering=True,id=05122117-0522-4844-80d6-4425d6fae978,network=Network(9a42964e-1108-49cc-ac3f-41165766e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05122117-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.282 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.282 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05122117-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.285 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.286 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.290 189495 INFO os_vif [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:65:c9,bridge_name='br-int',has_traffic_filtering=True,id=05122117-0522-4844-80d6-4425d6fae978,network=Network(9a42964e-1108-49cc-ac3f-41165766e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05122117-05')#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.290 189495 INFO nova.virt.libvirt.driver [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Deleting instance files /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970_del#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.291 189495 INFO nova.virt.libvirt.driver [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Deletion of /var/lib/nova/instances/b6b22803-169f-45be-85f7-058bfa3f2970_del complete#033[00m
Dec  1 09:46:56 compute-0 systemd[1]: libpod-conmon-6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1.scope: Deactivated successfully.
Dec  1 09:46:56 compute-0 podman[255476]: 2025-12-01 09:46:56.350114869 +0000 UTC m=+0.055278510 container remove 6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.358 189495 INFO nova.compute.manager [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.359 189495 DEBUG oslo.service.loopingcall [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.359 189495 DEBUG nova.compute.manager [-] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.360 189495 DEBUG nova.network.neutron [-] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.371 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f79377-58e0-47ec-a792-ede4d496c770]: (4, ('Mon Dec  1 09:46:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed (6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1)\n6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1\nMon Dec  1 09:46:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed (6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1)\n6ea0356d09770beedba0b32e9ab16b2b6ec629cc69571297bd28fdb8293639b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.376 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f79a7a30-5385-4271-a6fb-82bbea89d2c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.378 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a42964e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.380 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 kernel: tap9a42964e-10: left promiscuous mode
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.385 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.405 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.405 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[9e570f30-f494-4cb1-a2b1-974b76b12644]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.419 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3c97ef-8757-48f8-ad5e-d616ac7f7e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.421 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[978dbfde-98fa-4d7d-aa56-037152f7a1bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.436 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[1f65ce2a-c721-4c1d-b7b8-58b5af6cfac3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558499, 'reachable_time': 25494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255489, 'error': None, 'target': 'ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.439 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9a42964e-1108-49cc-ac3f-41165766e2ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 09:46:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:46:56.439 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[626cf3bf-e51b-4c4d-8679-e22b4f1074d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:46:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d9a42964e\x2d1108\x2d49cc\x2dac3f\x2d41165766e2ed.mount: Deactivated successfully.
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.536 189495 DEBUG nova.compute.manager [req-1dea70d9-90f3-4487-8320-278cbbd09813 req-0bca6795-0452-4359-a4a5-3f82bb96d6d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received event network-vif-unplugged-05122117-0522-4844-80d6-4425d6fae978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.537 189495 DEBUG oslo_concurrency.lockutils [req-1dea70d9-90f3-4487-8320-278cbbd09813 req-0bca6795-0452-4359-a4a5-3f82bb96d6d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.537 189495 DEBUG oslo_concurrency.lockutils [req-1dea70d9-90f3-4487-8320-278cbbd09813 req-0bca6795-0452-4359-a4a5-3f82bb96d6d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.537 189495 DEBUG oslo_concurrency.lockutils [req-1dea70d9-90f3-4487-8320-278cbbd09813 req-0bca6795-0452-4359-a4a5-3f82bb96d6d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.538 189495 DEBUG nova.compute.manager [req-1dea70d9-90f3-4487-8320-278cbbd09813 req-0bca6795-0452-4359-a4a5-3f82bb96d6d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] No waiting events found dispatching network-vif-unplugged-05122117-0522-4844-80d6-4425d6fae978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:56 compute-0 nova_compute[189491]: 2025-12-01 09:46:56.538 189495 DEBUG nova.compute.manager [req-1dea70d9-90f3-4487-8320-278cbbd09813 req-0bca6795-0452-4359-a4a5-3f82bb96d6d6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received event network-vif-unplugged-05122117-0522-4844-80d6-4425d6fae978 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.606 189495 DEBUG nova.network.neutron [-] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.624 189495 INFO nova.compute.manager [-] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Took 1.26 seconds to deallocate network for instance.#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.669 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.670 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.744 189495 DEBUG nova.compute.provider_tree [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.758 189495 DEBUG nova.scheduler.client.report [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.781 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.810 189495 INFO nova.scheduler.client.report [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Deleted allocations for instance b6b22803-169f-45be-85f7-058bfa3f2970#033[00m
Dec  1 09:46:57 compute-0 nova_compute[189491]: 2025-12-01 09:46:57.882 189495 DEBUG oslo_concurrency.lockutils [None req-a4b80e19-bdbd-4fa5-a2b8-e355463f80ae b40ddefd6a0e437e95ddb1bc36d5ec0b db1d07a763fd4c1d806a7cf648ffae54 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:58 compute-0 nova_compute[189491]: 2025-12-01 09:46:58.223 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:46:58 compute-0 nova_compute[189491]: 2025-12-01 09:46:58.630 189495 DEBUG nova.compute.manager [req-f30c8069-ee7f-4a3e-9252-7b31f7c536f2 req-ff24a7ee-c0bb-4add-ae7c-6caa18af8bbc ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received event network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:58 compute-0 nova_compute[189491]: 2025-12-01 09:46:58.631 189495 DEBUG oslo_concurrency.lockutils [req-f30c8069-ee7f-4a3e-9252-7b31f7c536f2 req-ff24a7ee-c0bb-4add-ae7c-6caa18af8bbc ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:46:58 compute-0 nova_compute[189491]: 2025-12-01 09:46:58.631 189495 DEBUG oslo_concurrency.lockutils [req-f30c8069-ee7f-4a3e-9252-7b31f7c536f2 req-ff24a7ee-c0bb-4add-ae7c-6caa18af8bbc ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:46:58 compute-0 nova_compute[189491]: 2025-12-01 09:46:58.632 189495 DEBUG oslo_concurrency.lockutils [req-f30c8069-ee7f-4a3e-9252-7b31f7c536f2 req-ff24a7ee-c0bb-4add-ae7c-6caa18af8bbc ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "b6b22803-169f-45be-85f7-058bfa3f2970-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:46:58 compute-0 nova_compute[189491]: 2025-12-01 09:46:58.632 189495 DEBUG nova.compute.manager [req-f30c8069-ee7f-4a3e-9252-7b31f7c536f2 req-ff24a7ee-c0bb-4add-ae7c-6caa18af8bbc ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] No waiting events found dispatching network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:46:58 compute-0 nova_compute[189491]: 2025-12-01 09:46:58.632 189495 WARNING nova.compute.manager [req-f30c8069-ee7f-4a3e-9252-7b31f7c536f2 req-ff24a7ee-c0bb-4add-ae7c-6caa18af8bbc ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received unexpected event network-vif-plugged-05122117-0522-4844-80d6-4425d6fae978 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:46:58 compute-0 nova_compute[189491]: 2025-12-01 09:46:58.633 189495 DEBUG nova.compute.manager [req-f30c8069-ee7f-4a3e-9252-7b31f7c536f2 req-ff24a7ee-c0bb-4add-ae7c-6caa18af8bbc ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Received event network-vif-deleted-05122117-0522-4844-80d6-4425d6fae978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:46:59 compute-0 podman[203700]: time="2025-12-01T09:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:46:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:46:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4815 "" "Go-http-client/1.1"
Dec  1 09:47:01 compute-0 nova_compute[189491]: 2025-12-01 09:47:01.286 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:01 compute-0 openstack_network_exporter[205866]: ERROR   09:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:47:01 compute-0 openstack_network_exporter[205866]: ERROR   09:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:47:01 compute-0 openstack_network_exporter[205866]: ERROR   09:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:47:01 compute-0 openstack_network_exporter[205866]: ERROR   09:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:47:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:47:01 compute-0 openstack_network_exporter[205866]: ERROR   09:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:47:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:47:03 compute-0 nova_compute[189491]: 2025-12-01 09:47:03.227 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:03 compute-0 ovn_controller[97794]: 2025-12-01T09:47:03Z|00172|binding|INFO|Releasing lport 7159c06b-520e-4157-9235-0b4ddbac66cf from this chassis (sb_readonly=0)
Dec  1 09:47:03 compute-0 nova_compute[189491]: 2025-12-01 09:47:03.689 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:03 compute-0 ovn_controller[97794]: 2025-12-01T09:47:03Z|00173|binding|INFO|Releasing lport 7159c06b-520e-4157-9235-0b4ddbac66cf from this chassis (sb_readonly=0)
Dec  1 09:47:03 compute-0 nova_compute[189491]: 2025-12-01 09:47:03.950 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:04 compute-0 podman[255491]: 2025-12-01 09:47:04.719133851 +0000 UTC m=+0.093497084 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:47:04 compute-0 podman[255492]: 2025-12-01 09:47:04.734103096 +0000 UTC m=+0.101846318 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Dec  1 09:47:04 compute-0 podman[255493]: 2025-12-01 09:47:04.757942548 +0000 UTC m=+0.115641724 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec  1 09:47:06 compute-0 nova_compute[189491]: 2025-12-01 09:47:06.290 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:08 compute-0 nova_compute[189491]: 2025-12-01 09:47:08.230 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.714 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.715 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.715 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.716 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.716 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.716 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.744 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.764 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.764 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Image id 280f4e4d-4a12-4164-a687-6106a9afc7fe yields fingerprint 8b917e1e1f61d3c861f59bffbbb40426a7633e75 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.765 189495 INFO nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] image 280f4e4d-4a12-4164-a687-6106a9afc7fe at (/var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75): checking
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.765 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] image 280f4e4d-4a12-4164-a687-6106a9afc7fe at (/var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.768 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.769 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.769 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.770 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.872 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.873 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 is backed by 8b917e1e1f61d3c861f59bffbbb40426a7633e75 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.873 189495 WARNING nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.874 189495 WARNING nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.874 189495 WARNING nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.874 189495 INFO nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Active base files: /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.875 189495 INFO nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Removable base files: /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5 /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366 /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.875 189495 INFO nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ace19d5a50ba51d9cdb8d0e36f5ab43d2c0f33b5
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.876 189495 INFO nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/3bf6c54845f5e9621e4fb27f7d70d848ea2fd366
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.876 189495 INFO nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/bfffb0fe9ffb8885e11e7a8e92aeafe5ed4e87fd
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.877 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.877 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.877 189495 DEBUG nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec  1 09:47:09 compute-0 nova_compute[189491]: 2025-12-01 09:47:09.878 189495 INFO nova.virt.libvirt.imagecache [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec  1 09:47:11 compute-0 nova_compute[189491]: 2025-12-01 09:47:11.248 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764582416.2458327, b6b22803-169f-45be-85f7-058bfa3f2970 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:47:11 compute-0 nova_compute[189491]: 2025-12-01 09:47:11.249 189495 INFO nova.compute.manager [-] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] VM Stopped (Lifecycle Event)
Dec  1 09:47:11 compute-0 nova_compute[189491]: 2025-12-01 09:47:11.268 189495 DEBUG nova.compute.manager [None req-cb16480a-231c-4a8d-9607-0daa8d884674 - - - - - -] [instance: b6b22803-169f-45be-85f7-058bfa3f2970] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:47:11 compute-0 nova_compute[189491]: 2025-12-01 09:47:11.293 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:12 compute-0 podman[255554]: 2025-12-01 09:47:12.687349626 +0000 UTC m=+0.065056960 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm)
Dec  1 09:47:12 compute-0 podman[255555]: 2025-12-01 09:47:12.721308445 +0000 UTC m=+0.093496274 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 09:47:13 compute-0 nova_compute[189491]: 2025-12-01 09:47:13.233 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:15 compute-0 podman[255593]: 2025-12-01 09:47:15.696845255 +0000 UTC m=+0.071573169 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 09:47:15 compute-0 podman[255594]: 2025-12-01 09:47:15.774220554 +0000 UTC m=+0.143637658 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 09:47:16 compute-0 nova_compute[189491]: 2025-12-01 09:47:16.296 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:18 compute-0 nova_compute[189491]: 2025-12-01 09:47:18.235 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:21 compute-0 nova_compute[189491]: 2025-12-01 09:47:21.300 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:23 compute-0 nova_compute[189491]: 2025-12-01 09:47:23.238 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:26 compute-0 nova_compute[189491]: 2025-12-01 09:47:26.303 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:47:26.539 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:47:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:47:26.540 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:47:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:47:26.541 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:47:26 compute-0 podman[255636]: 2025-12-01 09:47:26.698368479 +0000 UTC m=+0.074974591 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:47:26 compute-0 podman[255637]: 2025-12-01 09:47:26.708283521 +0000 UTC m=+0.075525324 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  1 09:47:28 compute-0 nova_compute[189491]: 2025-12-01 09:47:28.242 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:29 compute-0 podman[203700]: time="2025-12-01T09:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:47:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:47:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 09:47:31 compute-0 nova_compute[189491]: 2025-12-01 09:47:31.306 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:47:31 compute-0 openstack_network_exporter[205866]: ERROR   09:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:47:31 compute-0 openstack_network_exporter[205866]: ERROR   09:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:47:31 compute-0 openstack_network_exporter[205866]: ERROR   09:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:47:31 compute-0 openstack_network_exporter[205866]: ERROR   09:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:47:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:47:31 compute-0 openstack_network_exporter[205866]: ERROR   09:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:47:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:47:33 compute-0 nova_compute[189491]: 2025-12-01 09:47:33.246 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:35 compute-0 podman[255677]: 2025-12-01 09:47:35.715455816 +0000 UTC m=+0.073053755 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:47:35 compute-0 nova_compute[189491]: 2025-12-01 09:47:35.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:35 compute-0 nova_compute[189491]: 2025-12-01 09:47:35.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:47:35 compute-0 podman[255678]: 2025-12-01 09:47:35.72990882 +0000 UTC m=+0.079570715 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:47:35 compute-0 podman[255679]: 2025-12-01 09:47:35.757806861 +0000 UTC m=+0.095380560 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release-0.7.12=, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  1 09:47:36 compute-0 nova_compute[189491]: 2025-12-01 09:47:36.308 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:38 compute-0 ovn_controller[97794]: 2025-12-01T09:47:38Z|00174|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Dec  1 09:47:38 compute-0 nova_compute[189491]: 2025-12-01 09:47:38.250 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:39 compute-0 nova_compute[189491]: 2025-12-01 09:47:39.736 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:39 compute-0 nova_compute[189491]: 2025-12-01 09:47:39.737 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:47:39 compute-0 nova_compute[189491]: 2025-12-01 09:47:39.753 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 09:47:41 compute-0 nova_compute[189491]: 2025-12-01 09:47:41.313 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:43 compute-0 nova_compute[189491]: 2025-12-01 09:47:43.255 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:43 compute-0 podman[255738]: 2025-12-01 09:47:43.701407034 +0000 UTC m=+0.072767028 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Dec  1 09:47:43 compute-0 podman[255739]: 2025-12-01 09:47:43.722334945 +0000 UTC m=+0.093426713 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:47:46 compute-0 nova_compute[189491]: 2025-12-01 09:47:46.317 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:46 compute-0 podman[255777]: 2025-12-01 09:47:46.740195938 +0000 UTC m=+0.101246674 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:47:46 compute-0 podman[255778]: 2025-12-01 09:47:46.787637766 +0000 UTC m=+0.150144397 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.260 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.744 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.744 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.745 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.824 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.887 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.888 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:47:48 compute-0 nova_compute[189491]: 2025-12-01 09:47:48.954 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.283 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.285 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=72.27721786499023GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.286 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.286 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.356 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.357 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.357 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.372 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.390 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.390 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.403 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.431 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.474 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.492 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.521 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:47:49 compute-0 nova_compute[189491]: 2025-12-01 09:47:49.522 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:47:50 compute-0 nova_compute[189491]: 2025-12-01 09:47:50.523 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:51 compute-0 nova_compute[189491]: 2025-12-01 09:47:51.321 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:51 compute-0 nova_compute[189491]: 2025-12-01 09:47:51.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:51 compute-0 nova_compute[189491]: 2025-12-01 09:47:51.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:52 compute-0 nova_compute[189491]: 2025-12-01 09:47:52.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:52 compute-0 nova_compute[189491]: 2025-12-01 09:47:52.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:47:52 compute-0 nova_compute[189491]: 2025-12-01 09:47:52.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:52 compute-0 nova_compute[189491]: 2025-12-01 09:47:52.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:47:52 compute-0 nova_compute[189491]: 2025-12-01 09:47:52.736 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:47:53 compute-0 nova_compute[189491]: 2025-12-01 09:47:53.262 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:54 compute-0 nova_compute[189491]: 2025-12-01 09:47:54.737 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:55 compute-0 nova_compute[189491]: 2025-12-01 09:47:55.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:56 compute-0 nova_compute[189491]: 2025-12-01 09:47:56.324 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:56 compute-0 nova_compute[189491]: 2025-12-01 09:47:56.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:57 compute-0 podman[255831]: 2025-12-01 09:47:57.70063704 +0000 UTC m=+0.072942952 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  1 09:47:57 compute-0 podman[255830]: 2025-12-01 09:47:57.706742729 +0000 UTC m=+0.084003702 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:47:57 compute-0 nova_compute[189491]: 2025-12-01 09:47:57.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:58 compute-0 nova_compute[189491]: 2025-12-01 09:47:58.264 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:47:59 compute-0 nova_compute[189491]: 2025-12-01 09:47:59.724 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:47:59 compute-0 podman[203700]: time="2025-12-01T09:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:47:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:47:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 09:48:01 compute-0 nova_compute[189491]: 2025-12-01 09:48:01.327 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:01 compute-0 openstack_network_exporter[205866]: ERROR   09:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:48:01 compute-0 openstack_network_exporter[205866]: ERROR   09:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:48:01 compute-0 openstack_network_exporter[205866]: ERROR   09:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:48:01 compute-0 openstack_network_exporter[205866]: ERROR   09:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:48:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:48:01 compute-0 openstack_network_exporter[205866]: ERROR   09:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:48:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:48:03 compute-0 nova_compute[189491]: 2025-12-01 09:48:03.268 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:06 compute-0 nova_compute[189491]: 2025-12-01 09:48:06.330 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:06 compute-0 podman[255873]: 2025-12-01 09:48:06.697272487 +0000 UTC m=+0.073141736 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:48:06 compute-0 podman[255874]: 2025-12-01 09:48:06.726552792 +0000 UTC m=+0.095286697 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:48:06 compute-0 podman[255875]: 2025-12-01 09:48:06.775640851 +0000 UTC m=+0.130052967 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec  1 09:48:08 compute-0 nova_compute[189491]: 2025-12-01 09:48:08.271 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:11 compute-0 nova_compute[189491]: 2025-12-01 09:48:11.334 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:13 compute-0 nova_compute[189491]: 2025-12-01 09:48:13.272 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:14 compute-0 podman[255932]: 2025-12-01 09:48:14.715479963 +0000 UTC m=+0.079008851 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 09:48:14 compute-0 podman[255933]: 2025-12-01 09:48:14.718068927 +0000 UTC m=+0.073188679 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:48:15 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 09:48:16 compute-0 nova_compute[189491]: 2025-12-01 09:48:16.338 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:17 compute-0 podman[255972]: 2025-12-01 09:48:17.718822962 +0000 UTC m=+0.084559827 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=multipathd)
Dec  1 09:48:17 compute-0 podman[255973]: 2025-12-01 09:48:17.780054257 +0000 UTC m=+0.141983819 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  1 09:48:18 compute-0 nova_compute[189491]: 2025-12-01 09:48:18.275 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.795 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.796 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.806 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'name': 'te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.806 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:48:19.806929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.848 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 30153728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.849 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.849 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.849 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:48:19.849849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.864 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.864 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.865 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.865 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 537631881 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.865 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 54970899 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.866 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.866 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.866 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.866 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.867 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.867 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.867 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:48:19.865417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:48:19.866532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:48:19.867860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:48:19.868825) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.891 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.891 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.891 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.892 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.892 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 3026166253 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.892 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.892 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.893 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.893 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.893 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.894 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:48:19.892095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:48:19.893134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:48:19.894404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.899 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.901 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:48:19.900483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.905 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.905 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:48:19.901103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.905 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.905 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.906 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.906 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.906 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.906 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/memory.usage volume: 43.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.906 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.906 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.906 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.907 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/cpu volume: 190670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.909 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.909 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.909 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.909 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.911 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.911 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 1094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.911 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.912 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.912 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:48:19.901902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:48:19.902559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:48:19.903331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:48:19.904194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.916 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:48:19.905066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.916 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:48:19.906206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.917 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:48:19.907065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.918 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:48:19.907790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.918 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:48:19.908489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:48:19.909253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:48:19.910337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:48:19.911038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:48:19.912223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:48:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:48:21 compute-0 nova_compute[189491]: 2025-12-01 09:48:21.341 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:23 compute-0 nova_compute[189491]: 2025-12-01 09:48:23.278 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:26 compute-0 nova_compute[189491]: 2025-12-01 09:48:26.345 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:48:26.540 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:48:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:48:26.541 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:48:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:48:26.542 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:48:28 compute-0 nova_compute[189491]: 2025-12-01 09:48:28.282 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:28 compute-0 podman[256017]: 2025-12-01 09:48:28.724876834 +0000 UTC m=+0.092929301 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:48:28 compute-0 podman[256018]: 2025-12-01 09:48:28.729592419 +0000 UTC m=+0.090600114 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  1 09:48:29 compute-0 podman[203700]: time="2025-12-01T09:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:48:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:48:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 09:48:31 compute-0 nova_compute[189491]: 2025-12-01 09:48:31.111 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:31 compute-0 nova_compute[189491]: 2025-12-01 09:48:31.144 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:48:31 compute-0 nova_compute[189491]: 2025-12-01 09:48:31.144 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:48:31 compute-0 nova_compute[189491]: 2025-12-01 09:48:31.145 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:48:31 compute-0 nova_compute[189491]: 2025-12-01 09:48:31.179 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:48:31 compute-0 nova_compute[189491]: 2025-12-01 09:48:31.348 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:31 compute-0 openstack_network_exporter[205866]: ERROR   09:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:48:31 compute-0 openstack_network_exporter[205866]: ERROR   09:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:48:31 compute-0 openstack_network_exporter[205866]: ERROR   09:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:48:31 compute-0 openstack_network_exporter[205866]: ERROR   09:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:48:31 compute-0 openstack_network_exporter[205866]: ERROR   09:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:48:33 compute-0 nova_compute[189491]: 2025-12-01 09:48:33.285 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:36 compute-0 nova_compute[189491]: 2025-12-01 09:48:36.352 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:37 compute-0 podman[256057]: 2025-12-01 09:48:37.724815672 +0000 UTC m=+0.082065495 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  1 09:48:37 compute-0 podman[256056]: 2025-12-01 09:48:37.727344823 +0000 UTC m=+0.087290972 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:48:37 compute-0 podman[256058]: 2025-12-01 09:48:37.77633981 +0000 UTC m=+0.131260567 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Dec  1 09:48:38 compute-0 nova_compute[189491]: 2025-12-01 09:48:38.287 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:39 compute-0 nova_compute[189491]: 2025-12-01 09:48:39.747 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:39 compute-0 nova_compute[189491]: 2025-12-01 09:48:39.748 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:48:39 compute-0 nova_compute[189491]: 2025-12-01 09:48:39.749 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:48:40 compute-0 nova_compute[189491]: 2025-12-01 09:48:40.884 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:48:40 compute-0 nova_compute[189491]: 2025-12-01 09:48:40.885 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:48:40 compute-0 nova_compute[189491]: 2025-12-01 09:48:40.885 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:48:40 compute-0 nova_compute[189491]: 2025-12-01 09:48:40.886 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:48:41 compute-0 nova_compute[189491]: 2025-12-01 09:48:41.355 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:43 compute-0 nova_compute[189491]: 2025-12-01 09:48:43.291 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:43 compute-0 nova_compute[189491]: 2025-12-01 09:48:43.936 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:48:43 compute-0 nova_compute[189491]: 2025-12-01 09:48:43.963 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:48:43 compute-0 nova_compute[189491]: 2025-12-01 09:48:43.964 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:48:45 compute-0 podman[256114]: 2025-12-01 09:48:45.717197217 +0000 UTC m=+0.072849950 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 09:48:45 compute-0 podman[256115]: 2025-12-01 09:48:45.735381261 +0000 UTC m=+0.099696726 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 09:48:46 compute-0 nova_compute[189491]: 2025-12-01 09:48:46.359 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.293 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:48 compute-0 podman[256153]: 2025-12-01 09:48:48.693085035 +0000 UTC m=+0.067917999 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:48 compute-0 podman[256154]: 2025-12-01 09:48:48.745429514 +0000 UTC m=+0.113336499 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.744 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.812 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.870 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.871 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:48:48 compute-0 nova_compute[189491]: 2025-12-01 09:48:48.931 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.255 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.256 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5159MB free_disk=72.2771110534668GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.256 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.257 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.414 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.415 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.416 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.549 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.565 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.567 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:48:49 compute-0 nova_compute[189491]: 2025-12-01 09:48:49.567 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:48:51 compute-0 nova_compute[189491]: 2025-12-01 09:48:51.362 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:51 compute-0 nova_compute[189491]: 2025-12-01 09:48:51.567 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:51 compute-0 nova_compute[189491]: 2025-12-01 09:48:51.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:52 compute-0 nova_compute[189491]: 2025-12-01 09:48:52.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:52 compute-0 nova_compute[189491]: 2025-12-01 09:48:52.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:52 compute-0 nova_compute[189491]: 2025-12-01 09:48:52.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:48:53 compute-0 nova_compute[189491]: 2025-12-01 09:48:53.296 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:55 compute-0 nova_compute[189491]: 2025-12-01 09:48:55.717 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:56 compute-0 nova_compute[189491]: 2025-12-01 09:48:56.366 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:57 compute-0 nova_compute[189491]: 2025-12-01 09:48:57.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:57 compute-0 nova_compute[189491]: 2025-12-01 09:48:57.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:48:58 compute-0 nova_compute[189491]: 2025-12-01 09:48:58.298 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:48:59 compute-0 podman[203700]: time="2025-12-01T09:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:48:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:48:59 compute-0 podman[256205]: 2025-12-01 09:48:59.745278767 +0000 UTC m=+0.118741830 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:48:59 compute-0 podman[256206]: 2025-12-01 09:48:59.760589281 +0000 UTC m=+0.127587626 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 09:48:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 09:49:01 compute-0 nova_compute[189491]: 2025-12-01 09:49:01.369 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:01 compute-0 openstack_network_exporter[205866]: ERROR   09:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:49:01 compute-0 openstack_network_exporter[205866]: ERROR   09:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:49:01 compute-0 openstack_network_exporter[205866]: ERROR   09:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:49:01 compute-0 openstack_network_exporter[205866]: ERROR   09:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:49:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:49:01 compute-0 openstack_network_exporter[205866]: ERROR   09:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:49:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:49:03 compute-0 nova_compute[189491]: 2025-12-01 09:49:03.300 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:06 compute-0 nova_compute[189491]: 2025-12-01 09:49:06.372 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:08 compute-0 nova_compute[189491]: 2025-12-01 09:49:08.303 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:08 compute-0 podman[256244]: 2025-12-01 09:49:08.706543351 +0000 UTC m=+0.076859818 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:49:08 compute-0 podman[256245]: 2025-12-01 09:49:08.727194826 +0000 UTC m=+0.088728588 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  1 09:49:08 compute-0 podman[256251]: 2025-12-01 09:49:08.744236202 +0000 UTC m=+0.096899207 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, release-0.7.12=, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., version=9.4)
Dec  1 09:49:11 compute-0 nova_compute[189491]: 2025-12-01 09:49:11.376 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:13 compute-0 nova_compute[189491]: 2025-12-01 09:49:13.305 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:16 compute-0 nova_compute[189491]: 2025-12-01 09:49:16.378 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:16 compute-0 podman[256303]: 2025-12-01 09:49:16.699676165 +0000 UTC m=+0.077259048 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible)
Dec  1 09:49:16 compute-0 podman[256304]: 2025-12-01 09:49:16.713662186 +0000 UTC m=+0.086073733 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 09:49:18 compute-0 nova_compute[189491]: 2025-12-01 09:49:18.308 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:19 compute-0 podman[256346]: 2025-12-01 09:49:19.739607736 +0000 UTC m=+0.099454389 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:49:19 compute-0 podman[256347]: 2025-12-01 09:49:19.789955886 +0000 UTC m=+0.154892674 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Dec  1 09:49:21 compute-0 nova_compute[189491]: 2025-12-01 09:49:21.381 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:23 compute-0 nova_compute[189491]: 2025-12-01 09:49:23.309 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:26 compute-0 nova_compute[189491]: 2025-12-01 09:49:26.383 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:49:26.541 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:49:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:49:26.541 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:49:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:49:26.542 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:49:28 compute-0 nova_compute[189491]: 2025-12-01 09:49:28.311 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:29 compute-0 podman[203700]: time="2025-12-01T09:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:49:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:49:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 09:49:30 compute-0 podman[256389]: 2025-12-01 09:49:30.708019253 +0000 UTC m=+0.076636243 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 09:49:30 compute-0 podman[256388]: 2025-12-01 09:49:30.726843982 +0000 UTC m=+0.098623009 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:49:31 compute-0 nova_compute[189491]: 2025-12-01 09:49:31.387 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:31 compute-0 openstack_network_exporter[205866]: ERROR   09:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:49:31 compute-0 openstack_network_exporter[205866]: ERROR   09:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:49:31 compute-0 openstack_network_exporter[205866]: ERROR   09:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:49:31 compute-0 openstack_network_exporter[205866]: ERROR   09:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:49:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:49:31 compute-0 openstack_network_exporter[205866]: ERROR   09:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:49:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:49:33 compute-0 nova_compute[189491]: 2025-12-01 09:49:33.314 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:36 compute-0 nova_compute[189491]: 2025-12-01 09:49:36.389 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:38 compute-0 nova_compute[189491]: 2025-12-01 09:49:38.316 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:39 compute-0 podman[256429]: 2025-12-01 09:49:39.751196377 +0000 UTC m=+0.113735739 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:49:39 compute-0 podman[256430]: 2025-12-01 09:49:39.755873371 +0000 UTC m=+0.115765708 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec  1 09:49:39 compute-0 podman[256431]: 2025-12-01 09:49:39.77545991 +0000 UTC m=+0.112996231 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, container_name=kepler, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., vcs-type=git)
Dec  1 09:49:40 compute-0 nova_compute[189491]: 2025-12-01 09:49:40.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:40 compute-0 nova_compute[189491]: 2025-12-01 09:49:40.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:49:40 compute-0 nova_compute[189491]: 2025-12-01 09:49:40.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:49:41 compute-0 nova_compute[189491]: 2025-12-01 09:49:41.392 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:41 compute-0 nova_compute[189491]: 2025-12-01 09:49:41.887 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:49:41 compute-0 nova_compute[189491]: 2025-12-01 09:49:41.888 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:49:41 compute-0 nova_compute[189491]: 2025-12-01 09:49:41.888 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:49:41 compute-0 nova_compute[189491]: 2025-12-01 09:49:41.889 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:49:43 compute-0 nova_compute[189491]: 2025-12-01 09:49:43.318 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:44 compute-0 nova_compute[189491]: 2025-12-01 09:49:44.169 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:49:44 compute-0 nova_compute[189491]: 2025-12-01 09:49:44.187 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:49:44 compute-0 nova_compute[189491]: 2025-12-01 09:49:44.188 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:49:46 compute-0 nova_compute[189491]: 2025-12-01 09:49:46.396 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:47 compute-0 podman[256494]: 2025-12-01 09:49:47.692802473 +0000 UTC m=+0.066803023 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 09:49:47 compute-0 podman[256495]: 2025-12-01 09:49:47.694906144 +0000 UTC m=+0.065757057 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 09:49:48 compute-0 nova_compute[189491]: 2025-12-01 09:49:48.322 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:50 compute-0 podman[256533]: 2025-12-01 09:49:50.688837192 +0000 UTC m=+0.065808458 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:50 compute-0 podman[256534]: 2025-12-01 09:49:50.734048587 +0000 UTC m=+0.105517688 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true)
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.745 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.747 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.815 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.886 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.887 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:49:50 compute-0 nova_compute[189491]: 2025-12-01 09:49:50.944 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.267 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.268 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5070MB free_disk=72.27721405029297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.269 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.269 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.354 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.355 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.356 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.399 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.406 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.422 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.423 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:49:51 compute-0 nova_compute[189491]: 2025-12-01 09:49:51.424 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:49:52 compute-0 nova_compute[189491]: 2025-12-01 09:49:52.424 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:52 compute-0 nova_compute[189491]: 2025-12-01 09:49:52.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:53 compute-0 nova_compute[189491]: 2025-12-01 09:49:53.322 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:53 compute-0 nova_compute[189491]: 2025-12-01 09:49:53.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:54 compute-0 nova_compute[189491]: 2025-12-01 09:49:54.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:54 compute-0 nova_compute[189491]: 2025-12-01 09:49:54.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:49:56 compute-0 nova_compute[189491]: 2025-12-01 09:49:56.401 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:57 compute-0 nova_compute[189491]: 2025-12-01 09:49:57.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:57 compute-0 nova_compute[189491]: 2025-12-01 09:49:57.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:58 compute-0 nova_compute[189491]: 2025-12-01 09:49:58.324 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.329 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.329 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.351 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.419 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.420 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.428 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.428 189495 INFO nova.compute.claims [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.579 189495 DEBUG nova.compute.provider_tree [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.598 189495 DEBUG nova.scheduler.client.report [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.639 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.640 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.691 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.691 189495 DEBUG nova.network.neutron [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:59 compute-0 podman[203700]: time="2025-12-01T09:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:49:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.744 189495 INFO nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 09:49:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.773 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:49:59 compute-0 nova_compute[189491]: 2025-12-01 09:49:59.989 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.146 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.148 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.149 189495 INFO nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Creating image(s)#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.151 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "/var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.151 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "/var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.153 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "/var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
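The acquire/wait/release triplet around `disk.info` above is oslo.concurrency's interprocess file-lock pattern. A minimal stdlib sketch of the same idea (hypothetical `FileLock` class, not nova's actual implementation, which adds fair semaphores, logging, and the decorator API):

```python
import fcntl
import os
import tempfile
import time

class FileLock:
    """Minimal interprocess lock in the spirit of oslo_concurrency.lockutils:
    an exclusive flock on a lock file, with the wait time measured the way
    the 'waited N s' debug lines above report it. Sketch only."""

    def __init__(self, path):
        self.path = path
        self.fd = None
        self.waited = None

    def __enter__(self):
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        start = time.monotonic()
        fcntl.flock(self.fd, fcntl.LOCK_EX)  # blocks until acquired
        self.waited = time.monotonic() - start
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)  # "released"
        os.close(self.fd)

lock_path = os.path.join(tempfile.gettempdir(), "disk.info.lock")
with FileLock(lock_path) as lock:
    pass  # critical section: e.g. rewrite disk.info atomically
```

Uncontended, `waited` is near zero, matching the `waited 0.001s` figures in the log; a second process attempting the same lock would block in `flock` until release.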
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.166 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.221 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.222 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.223 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.235 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.254 189495 DEBUG nova.policy [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.295 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.296 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75,backing_fmt=raw /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.346 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75,backing_fmt=raw /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
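The `qemu-img create` call above is Nova's Qcow2 image backend making a copy-on-write overlay on the shared `_base` image, sized to the flavor's 1 GiB root disk. A sketch that rebuilds the same argv (helper name `qcow2_overlay_cmd` is hypothetical; it only constructs the command, it does not run qemu-img):

```python
def qcow2_overlay_cmd(base, overlay, size_bytes):
    """Build the qemu-img invocation for a qcow2 overlay backed by a raw
    base image, mirroring the command visible in the log above."""
    return [
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={base},backing_fmt=raw",
        overlay, str(size_bytes),
    ]

cmd = qcow2_overlay_cmd(
    "/var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75",
    "/var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk",
    1073741824,
)
```

The overlay stays tiny on disk; blocks are read from the base image until the guest writes them, which is why many instances can share one `_base` entry.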
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.347 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "8b917e1e1f61d3c861f59bffbbb40426a7633e75" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.347 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.403 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b917e1e1f61d3c861f59bffbbb40426a7633e75 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
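Each `qemu-img info` above runs under `oslo_concurrency.prlimit --as=1073741824 --cpu=30`, which caps the child's address space and CPU time so a malformed image cannot wedge the compute service. A rough stdlib equivalent (sketch; the real tool is a separate wrapper module that re-execs the command):

```python
import resource
import subprocess
import sys

def run_limited(cmd, as_bytes=1 << 30, cpu_secs=30):
    """Run cmd with address-space and CPU-time rlimits applied in the
    child before exec, roughly what the prlimit wrapper in the log does
    for qemu-img. POSIX-only sketch."""
    def set_limits():
        resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_secs, cpu_secs))
    return subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True)

result = run_limited([sys.executable, "-c", "print('ok')"])
```

A child that exceeds `RLIMIT_CPU` gets SIGKILL from the kernel; exceeding `RLIMIT_AS` makes its allocations fail, so the parent sees a nonzero exit instead of hanging.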
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.404 189495 DEBUG nova.virt.disk.api [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Checking if we can resize image /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.405 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.462 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.464 189495 DEBUG nova.virt.disk.api [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Cannot resize image /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
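The "Cannot resize image ... to a smaller size" debug line comes from comparing the requested size against the `virtual-size` reported by `qemu-img info --output=json`: growing is allowed, shrinking (or staying equal, as here, where both are 1 GiB) is refused because it risks data loss. A sketch of that check on sample JSON (`can_resize` is a hypothetical stand-in for `nova.virt.disk.api.can_resize_image`):

```python
import json

def can_resize(qemu_img_info_json, requested_bytes):
    """Resize is only permitted when the requested size is strictly
    larger than the image's current virtual size."""
    info = json.loads(qemu_img_info_json)
    return requested_bytes > info["virtual-size"]

# Abridged qemu-img info output for a 1 GiB image (sample values).
sample = json.dumps({"virtual-size": 1073741824, "format": "qcow2"})
```

With `requested_bytes` equal to the current virtual size, the check fails and the driver simply skips the resize, which is the benign path taken in this log.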
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.464 189495 DEBUG nova.objects.instance [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lazy-loading 'migration_context' on Instance uuid be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.490 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.490 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Ensure instance console log exists: /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.491 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.491 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:50:00 compute-0 nova_compute[189491]: 2025-12-01 09:50:00.492 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:50:00 compute-0 podman[256602]: 2025-12-01 09:50:00.965184889 +0000 UTC m=+0.072189504 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  1 09:50:00 compute-0 podman[256601]: 2025-12-01 09:50:00.975677105 +0000 UTC m=+0.083024148 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:50:01 compute-0 nova_compute[189491]: 2025-12-01 09:50:01.246 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:01.247 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:50:01 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:01.248 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:50:01 compute-0 nova_compute[189491]: 2025-12-01 09:50:01.381 189495 DEBUG nova.network.neutron [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Successfully created port: 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 09:50:01 compute-0 nova_compute[189491]: 2025-12-01 09:50:01.404 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:01 compute-0 openstack_network_exporter[205866]: ERROR   09:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:50:01 compute-0 openstack_network_exporter[205866]: ERROR   09:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:50:01 compute-0 openstack_network_exporter[205866]: ERROR   09:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:50:01 compute-0 openstack_network_exporter[205866]: ERROR   09:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:50:01 compute-0 openstack_network_exporter[205866]: ERROR   09:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:50:02 compute-0 nova_compute[189491]: 2025-12-01 09:50:02.944 189495 DEBUG nova.network.neutron [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Successfully updated port: 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 09:50:02 compute-0 nova_compute[189491]: 2025-12-01 09:50:02.961 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:50:02 compute-0 nova_compute[189491]: 2025-12-01 09:50:02.961 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquired lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:50:02 compute-0 nova_compute[189491]: 2025-12-01 09:50:02.961 189495 DEBUG nova.network.neutron [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 09:50:03 compute-0 nova_compute[189491]: 2025-12-01 09:50:03.079 189495 DEBUG nova.compute.manager [req-3310046c-8be2-4311-a9b7-6a4a5e0e7ba3 req-7853cce3-070a-49ba-997c-f4d699cbe9bd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received event network-changed-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:50:03 compute-0 nova_compute[189491]: 2025-12-01 09:50:03.079 189495 DEBUG nova.compute.manager [req-3310046c-8be2-4311-a9b7-6a4a5e0e7ba3 req-7853cce3-070a-49ba-997c-f4d699cbe9bd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Refreshing instance network info cache due to event network-changed-01cbdc1d-a86f-411f-a8e1-8a4166f063d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 09:50:03 compute-0 nova_compute[189491]: 2025-12-01 09:50:03.080 189495 DEBUG oslo_concurrency.lockutils [req-3310046c-8be2-4311-a9b7-6a4a5e0e7ba3 req-7853cce3-070a-49ba-997c-f4d699cbe9bd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:50:03 compute-0 nova_compute[189491]: 2025-12-01 09:50:03.176 189495 DEBUG nova.network.neutron [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 09:50:03 compute-0 nova_compute[189491]: 2025-12-01 09:50:03.327 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:04 compute-0 nova_compute[189491]: 2025-12-01 09:50:04.962 189495 DEBUG nova.network.neutron [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updating instance_info_cache with network_info: [{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:50:04 compute-0 nova_compute[189491]: 2025-12-01 09:50:04.997 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Releasing lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:50:04 compute-0 nova_compute[189491]: 2025-12-01 09:50:04.997 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Instance network_info: |[{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
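The `network_info` payload above is the JSON structure Nova caches per instance: a list of VIFs, each carrying its MAC, the bound network, and per-subnet fixed IPs. A sketch of walking that structure with the stdlib (the JSON below is abridged from the cache entry in the log; `fixed_ips` is a hypothetical helper, not a Nova API):

```python
import json

# Abridged instance_info_cache entry, shortened from the log line above.
network_info = json.loads("""[{
  "id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3",
  "address": "fa:16:3e:37:35:95",
  "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85",
              "bridge": "br-int",
              "subnets": [{"cidr": "10.100.0.0/16",
                           "ips": [{"address": "10.100.3.35",
                                    "type": "fixed"}]}]},
  "devname": "tap01cbdc1d-a8",
  "active": false}]""")

def fixed_ips(nw_info):
    """Collect every fixed IP across all VIFs in a cache entry."""
    return [ip["address"]
            for vif in nw_info
            for subnet in vif["network"]["subnets"]
            for ip in subnet["ips"]
            if ip.get("type") == "fixed"]
```

Note `"active": false` at this point: the port exists in Neutron but OVN has not yet reported the VIF plugged, which is why spawning waits for the network-vif-plugged event before resuming.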
Dec  1 09:50:04 compute-0 nova_compute[189491]: 2025-12-01 09:50:04.998 189495 DEBUG oslo_concurrency.lockutils [req-3310046c-8be2-4311-a9b7-6a4a5e0e7ba3 req-7853cce3-070a-49ba-997c-f4d699cbe9bd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquired lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:50:04 compute-0 nova_compute[189491]: 2025-12-01 09:50:04.998 189495 DEBUG nova.network.neutron [req-3310046c-8be2-4311-a9b7-6a4a5e0e7ba3 req-7853cce3-070a-49ba-997c-f4d699cbe9bd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Refreshing network info cache for port 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.002 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Start _get_guest_xml network_info=[{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:44:52Z,direct_url=<?>,disk_format='qcow2',id=280f4e4d-4a12-4164-a687-6106a9afc7fe,min_disk=0,min_ram=0,name='tempest-scenario-img--1642109444',owner='6d5294cc5ac64b22a4a0f770b8d8bc61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:44:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.009 189495 WARNING nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.015 189495 DEBUG nova.virt.libvirt.host [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.016 189495 DEBUG nova.virt.libvirt.host [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.025 189495 DEBUG nova.virt.libvirt.host [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.026 189495 DEBUG nova.virt.libvirt.host [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.026 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.026 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T09:41:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='422f041c-a187-4aa2-8167-37f3eb0e89c2',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T09:44:52Z,direct_url=<?>,disk_format='qcow2',id=280f4e4d-4a12-4164-a687-6106a9afc7fe,min_disk=0,min_ram=0,name='tempest-scenario-img--1642109444',owner='6d5294cc5ac64b22a4a0f770b8d8bc61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T09:44:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.027 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.027 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.028 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.028 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.028 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.029 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.029 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.029 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.030 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.030 189495 DEBUG nova.virt.hardware [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.034 189495 DEBUG nova.virt.libvirt.vif [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5',id=15,image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e03937ad-4d2d-4edc-9b33-ed8d878566ca'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d5294cc5ac64b22a4a0f770b8d8bc61',ramdisk_id='',reservation_id='r-nfp6qkos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1348038279',owner_user_name='tempest-PrometheusGabbiTest-13480
38279-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:50:00Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c54f3a4a232b4a739be88e97f2094d4f',uuid=be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.034 189495 DEBUG nova.network.os_vif_util [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converting VIF {"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.035 189495 DEBUG nova.network.os_vif_util [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:35:95,bridge_name='br-int',has_traffic_filtering=True,id=01cbdc1d-a86f-411f-a8e1-8a4166f063d3,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cbdc1d-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.036 189495 DEBUG nova.objects.instance [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lazy-loading 'pci_devices' on Instance uuid be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.051 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] End _get_guest_xml xml=<domain type="kvm">
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <uuid>be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2</uuid>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <name>instance-0000000f</name>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <memory>131072</memory>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <vcpu>1</vcpu>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <metadata>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <nova:name>te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5</nova:name>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <nova:creationTime>2025-12-01 09:50:05</nova:creationTime>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <nova:flavor name="m1.nano">
Dec  1 09:50:05 compute-0 nova_compute[189491]:        <nova:memory>128</nova:memory>
Dec  1 09:50:05 compute-0 nova_compute[189491]:        <nova:disk>1</nova:disk>
Dec  1 09:50:05 compute-0 nova_compute[189491]:        <nova:swap>0</nova:swap>
Dec  1 09:50:05 compute-0 nova_compute[189491]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 09:50:05 compute-0 nova_compute[189491]:        <nova:vcpus>1</nova:vcpus>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      </nova:flavor>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <nova:owner>
Dec  1 09:50:05 compute-0 nova_compute[189491]:        <nova:user uuid="c54f3a4a232b4a739be88e97f2094d4f">tempest-PrometheusGabbiTest-1348038279-project-member</nova:user>
Dec  1 09:50:05 compute-0 nova_compute[189491]:        <nova:project uuid="6d5294cc5ac64b22a4a0f770b8d8bc61">tempest-PrometheusGabbiTest-1348038279</nova:project>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      </nova:owner>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <nova:root type="image" uuid="280f4e4d-4a12-4164-a687-6106a9afc7fe"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <nova:ports>
Dec  1 09:50:05 compute-0 nova_compute[189491]:        <nova:port uuid="01cbdc1d-a86f-411f-a8e1-8a4166f063d3">
Dec  1 09:50:05 compute-0 nova_compute[189491]:          <nova:ip type="fixed" address="10.100.3.35" ipVersion="4"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:        </nova:port>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      </nova:ports>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </nova:instance>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  </metadata>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <sysinfo type="smbios">
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <system>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <entry name="manufacturer">RDO</entry>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <entry name="product">OpenStack Compute</entry>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <entry name="serial">be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2</entry>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <entry name="uuid">be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2</entry>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <entry name="family">Virtual Machine</entry>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </system>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  </sysinfo>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <os>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <boot dev="hd"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <smbios mode="sysinfo"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  </os>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <features>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <acpi/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <apic/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <vmcoreinfo/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  </features>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <clock offset="utc">
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <timer name="hpet" present="no"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  </clock>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <cpu mode="host-model" match="exact">
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  </cpu>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  <devices>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <disk type="file" device="disk">
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <target dev="vda" bus="virtio"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <disk type="file" device="cdrom">
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <source file="/var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.config"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <target dev="sda" bus="sata"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </disk>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <interface type="ethernet">
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <mac address="fa:16:3e:37:35:95"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <mtu size="1442"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <target dev="tap01cbdc1d-a8"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </interface>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <serial type="pty">
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <log file="/var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/console.log" append="off"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </serial>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <video>
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <model type="virtio"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </video>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <input type="tablet" bus="usb"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <rng model="virtio">
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <backend model="random">/dev/urandom</backend>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </rng>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <controller type="usb" index="0"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    <memballoon model="virtio">
Dec  1 09:50:05 compute-0 nova_compute[189491]:      <stats period="10"/>
Dec  1 09:50:05 compute-0 nova_compute[189491]:    </memballoon>
Dec  1 09:50:05 compute-0 nova_compute[189491]:  </devices>
Dec  1 09:50:05 compute-0 nova_compute[189491]: </domain>
Dec  1 09:50:05 compute-0 nova_compute[189491]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.052 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Preparing to wait for external event network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.053 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.053 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.053 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.054 189495 DEBUG nova.virt.libvirt.vif [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T09:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5',id=15,image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e03937ad-4d2d-4edc-9b33-ed8d878566ca'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d5294cc5ac64b22a4a0f770b8d8bc61',ramdisk_id='',reservation_id='r-nfp6qkos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1348038279',owner_user_name='tempest-PrometheusGabbi
Test-1348038279-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T09:50:00Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c54f3a4a232b4a739be88e97f2094d4f',uuid=be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.055 189495 DEBUG nova.network.os_vif_util [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converting VIF {"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.056 189495 DEBUG nova.network.os_vif_util [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:35:95,bridge_name='br-int',has_traffic_filtering=True,id=01cbdc1d-a86f-411f-a8e1-8a4166f063d3,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cbdc1d-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.056 189495 DEBUG os_vif [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:35:95,bridge_name='br-int',has_traffic_filtering=True,id=01cbdc1d-a86f-411f-a8e1-8a4166f063d3,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cbdc1d-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.057 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.057 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.058 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.061 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.061 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01cbdc1d-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.062 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01cbdc1d-a8, col_values=(('external_ids', {'iface-id': '01cbdc1d-a86f-411f-a8e1-8a4166f063d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:35:95', 'vm-uuid': 'be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.064 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:50:05 compute-0 NetworkManager[56318]: <info>  [1764582605.0660] manager: (tap01cbdc1d-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.066 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.075 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.077 189495 INFO os_vif [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:35:95,bridge_name='br-int',has_traffic_filtering=True,id=01cbdc1d-a86f-411f-a8e1-8a4166f063d3,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cbdc1d-a8')
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.138 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.138 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.139 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] No VIF found with MAC fa:16:3e:37:35:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.139 189495 INFO nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Using config drive
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.478 189495 INFO nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Creating config drive at /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.config
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.485 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6iz4iykp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.618 189495 DEBUG oslo_concurrency.processutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6iz4iykp" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 09:50:05 compute-0 kernel: tap01cbdc1d-a8: entered promiscuous mode
Dec  1 09:50:05 compute-0 NetworkManager[56318]: <info>  [1764582605.7067] manager: (tap01cbdc1d-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Dec  1 09:50:05 compute-0 ovn_controller[97794]: 2025-12-01T09:50:05Z|00175|binding|INFO|Claiming lport 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 for this chassis.
Dec  1 09:50:05 compute-0 ovn_controller[97794]: 2025-12-01T09:50:05Z|00176|binding|INFO|01cbdc1d-a86f-411f-a8e1-8a4166f063d3: Claiming fa:16:3e:37:35:95 10.100.3.35
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.725 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:50:05 compute-0 ovn_controller[97794]: 2025-12-01T09:50:05Z|00177|binding|INFO|Setting lport 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 ovn-installed in OVS
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.742 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:50:05 compute-0 systemd-machined[155812]: New machine qemu-16-instance-0000000f.
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.764 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:35:95 10.100.3.35'], port_security=['fa:16:3e:37:35:95 10.100.3.35'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.35/16', 'neutron:device_id': 'be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43f98091-3f01-4ffd-9cb2-02d78ab9f60c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c2dbc4a-f4e0-49c5-bb92-4872f344781e, chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=01cbdc1d-a86f-411f-a8e1-8a4166f063d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.766 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 in datapath cf0577af-a5ed-496f-aa24-ae4d86898e85 bound to our chassis
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.767 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0577af-a5ed-496f-aa24-ae4d86898e85
Dec  1 09:50:05 compute-0 ovn_controller[97794]: 2025-12-01T09:50:05Z|00178|binding|INFO|Setting lport 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 up in Southbound
Dec  1 09:50:05 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.785 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5f6d5c-dddb-451a-bcad-26e8447b201a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:50:05 compute-0 systemd-udevd[256665]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 09:50:05 compute-0 NetworkManager[56318]: <info>  [1764582605.8222] device (tap01cbdc1d-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 09:50:05 compute-0 NetworkManager[56318]: <info>  [1764582605.8298] device (tap01cbdc1d-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.828 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[1835c82b-afd4-4d52-8e99-a26320eb314d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.834 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[e96511e1-5a38-43b9-bd5f-538243273c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.859 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[0f365e28-117d-4f13-bde8-a9917e11fed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.875 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e4314911-8efb-4af9-a849-40280c74697c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0577af-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:ac:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554552, 'reachable_time': 42820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256675, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.898 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c512bbaa-9a05-48ef-9d58-dc785030343f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0577af-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554566, 'tstamp': 554566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256677, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapcf0577af-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554571, 'tstamp': 554571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256677, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.900 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0577af-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.903 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0577af-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.903 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 09:50:05 compute-0 nova_compute[189491]: 2025-12-01 09:50:05.903 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.904 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0577af-a0, col_values=(('external_ids', {'iface-id': '7159c06b-520e-4157-9235-0b4ddbac66cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 09:50:05 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:05.904 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 09:50:06 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 09:50:06 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 09:50:06 compute-0 nova_compute[189491]: 2025-12-01 09:50:06.593 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582606.5923343, be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:50:06 compute-0 nova_compute[189491]: 2025-12-01 09:50:06.594 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] VM Started (Lifecycle Event)
Dec  1 09:50:06 compute-0 nova_compute[189491]: 2025-12-01 09:50:06.690 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:50:06 compute-0 nova_compute[189491]: 2025-12-01 09:50:06.698 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582606.5925198, be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:50:06 compute-0 nova_compute[189491]: 2025-12-01 09:50:06.699 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] VM Paused (Lifecycle Event)
Dec  1 09:50:06 compute-0 nova_compute[189491]: 2025-12-01 09:50:06.720 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:50:06 compute-0 nova_compute[189491]: 2025-12-01 09:50:06.726 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 09:50:06 compute-0 nova_compute[189491]: 2025-12-01 09:50:06.746 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.187 189495 DEBUG nova.compute.manager [req-e2984226-0f70-49a2-a687-fb32210dfc40 req-b9ae5113-eee2-4653-a943-b85123630b42 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received event network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.187 189495 DEBUG oslo_concurrency.lockutils [req-e2984226-0f70-49a2-a687-fb32210dfc40 req-b9ae5113-eee2-4653-a943-b85123630b42 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.188 189495 DEBUG oslo_concurrency.lockutils [req-e2984226-0f70-49a2-a687-fb32210dfc40 req-b9ae5113-eee2-4653-a943-b85123630b42 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.188 189495 DEBUG oslo_concurrency.lockutils [req-e2984226-0f70-49a2-a687-fb32210dfc40 req-b9ae5113-eee2-4653-a943-b85123630b42 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.189 189495 DEBUG nova.compute.manager [req-e2984226-0f70-49a2-a687-fb32210dfc40 req-b9ae5113-eee2-4653-a943-b85123630b42 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Processing event network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.190 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.195 189495 DEBUG nova.virt.driver [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] Emitting event <LifecycleEvent: 1764582607.194896, be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.196 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] VM Resumed (Lifecycle Event)
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.199 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.206 189495 INFO nova.virt.libvirt.driver [-] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Instance spawned successfully.
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.206 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.238 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.243 189495 DEBUG nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.254 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.255 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.255 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.256 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.257 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.257 189495 DEBUG nova.virt.libvirt.driver [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.280 189495 INFO nova.compute.manager [None req-77a2257b-d591-472e-83f5-674811d0e9db - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.313 189495 INFO nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Took 7.17 seconds to spawn the instance on the hypervisor.
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.314 189495 DEBUG nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.380 189495 DEBUG nova.network.neutron [req-3310046c-8be2-4311-a9b7-6a4a5e0e7ba3 req-7853cce3-070a-49ba-997c-f4d699cbe9bd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updated VIF entry in instance network info cache for port 01cbdc1d-a86f-411f-a8e1-8a4166f063d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.382 189495 DEBUG nova.network.neutron [req-3310046c-8be2-4311-a9b7-6a4a5e0e7ba3 req-7853cce3-070a-49ba-997c-f4d699cbe9bd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updating instance_info_cache with network_info: [{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.387 189495 INFO nova.compute.manager [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Took 8.00 seconds to build instance.#033[00m
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.401 189495 DEBUG oslo_concurrency.lockutils [req-3310046c-8be2-4311-a9b7-6a4a5e0e7ba3 req-7853cce3-070a-49ba-997c-f4d699cbe9bd ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Releasing lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:50:07 compute-0 nova_compute[189491]: 2025-12-01 09:50:07.404 189495 DEBUG oslo_concurrency.lockutils [None req-0b7d50fc-15d2-4e07-ac8b-bf292e7d2152 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:50:08 compute-0 nova_compute[189491]: 2025-12-01 09:50:08.329 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:09 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:09.250 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:50:09 compute-0 nova_compute[189491]: 2025-12-01 09:50:09.276 189495 DEBUG nova.compute.manager [req-3aa49885-15c3-406b-a73e-9976083e0bd8 req-c303aaee-7c69-4810-9a8c-e9dccbe8ca8f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received event network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:50:09 compute-0 nova_compute[189491]: 2025-12-01 09:50:09.277 189495 DEBUG oslo_concurrency.lockutils [req-3aa49885-15c3-406b-a73e-9976083e0bd8 req-c303aaee-7c69-4810-9a8c-e9dccbe8ca8f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:50:09 compute-0 nova_compute[189491]: 2025-12-01 09:50:09.278 189495 DEBUG oslo_concurrency.lockutils [req-3aa49885-15c3-406b-a73e-9976083e0bd8 req-c303aaee-7c69-4810-9a8c-e9dccbe8ca8f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:50:09 compute-0 nova_compute[189491]: 2025-12-01 09:50:09.278 189495 DEBUG oslo_concurrency.lockutils [req-3aa49885-15c3-406b-a73e-9976083e0bd8 req-c303aaee-7c69-4810-9a8c-e9dccbe8ca8f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:50:09 compute-0 nova_compute[189491]: 2025-12-01 09:50:09.279 189495 DEBUG nova.compute.manager [req-3aa49885-15c3-406b-a73e-9976083e0bd8 req-c303aaee-7c69-4810-9a8c-e9dccbe8ca8f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] No waiting events found dispatching network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:50:09 compute-0 nova_compute[189491]: 2025-12-01 09:50:09.279 189495 WARNING nova.compute.manager [req-3aa49885-15c3-406b-a73e-9976083e0bd8 req-c303aaee-7c69-4810-9a8c-e9dccbe8ca8f ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received unexpected event network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 for instance with vm_state active and task_state None.#033[00m
Dec  1 09:50:10 compute-0 nova_compute[189491]: 2025-12-01 09:50:10.065 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:10 compute-0 podman[256705]: 2025-12-01 09:50:10.705658051 +0000 UTC m=+0.077610967 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:50:10 compute-0 podman[256707]: 2025-12-01 09:50:10.711313039 +0000 UTC m=+0.078206241 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-type=git, release-0.7.12=, io.openshift.expose-services=, container_name=kepler, name=ubi9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
Dec  1 09:50:10 compute-0 podman[256706]: 2025-12-01 09:50:10.740251275 +0000 UTC m=+0.107475275 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 09:50:13 compute-0 nova_compute[189491]: 2025-12-01 09:50:13.331 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:15 compute-0 nova_compute[189491]: 2025-12-01 09:50:15.070 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:18 compute-0 nova_compute[189491]: 2025-12-01 09:50:18.333 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:18 compute-0 podman[256762]: 2025-12-01 09:50:18.45579528 +0000 UTC m=+0.080730902 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 09:50:18 compute-0 podman[256761]: 2025-12-01 09:50:18.506359865 +0000 UTC m=+0.134278620 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64)
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.796 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.797 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.804 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.806 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b15b15c247f410e52837a95689cb091041b96c474d34a98b1d5f06140c01501" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:19.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:50:20 compute-0 nova_compute[189491]: 2025-12-01 09:50:20.074 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.373 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Mon, 01 Dec 2025 09:50:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d62443e9-bdf5-472a-9905-e61155c46092 x-openstack-request-id: req-d62443e9-bdf5-472a-9905-e61155c46092 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.373 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2", "name": "te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5", "status": "ACTIVE", "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "user_id": "c54f3a4a232b4a739be88e97f2094d4f", "metadata": {"metering.server_group": "e03937ad-4d2d-4edc-9b33-ed8d878566ca"}, "hostId": "b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406", "image": {"id": "280f4e4d-4a12-4164-a687-6106a9afc7fe", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/280f4e4d-4a12-4164-a687-6106a9afc7fe"}]}, "flavor": {"id": "422f041c-a187-4aa2-8167-37f3eb0e89c2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/422f041c-a187-4aa2-8167-37f3eb0e89c2"}]}, "created": "2025-12-01T09:49:58Z", "updated": "2025-12-01T09:50:07Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.35", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:37:35:95"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T09:50:07.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.373 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 used request id req-d62443e9-bdf5-472a-9905-e61155c46092 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.374 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2', 'name': 'te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.377 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'name': 'te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.377 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.378 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:50:20.378145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.415 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.416 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.456 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 30153728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.456 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.458 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.458 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:50:20.458294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.474 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.474 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.488 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.489 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.489 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.490 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.490 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.490 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 401632463 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.491 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 72993382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:50:20.490609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.491 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 537631881 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.492 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 54970899 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.492 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.493 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.493 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:50:20.493249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.493 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.494 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.494 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.496 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.496 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.496 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.496 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:50:20.496350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.497 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.497 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.498 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.498 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:50:20.498733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.521 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.539 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.539 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.540 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.540 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.540 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 3026166253 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.540 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.541 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.541 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.542 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.542 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.542 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.542 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:50:20.540079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.543 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:50:20.541882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:50:20.543710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.546 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 / tap01cbdc1d-a8 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.547 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.550 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes volume: 1436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.550 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.551 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.551 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.551 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.551 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5>]
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.552 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.552 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.553 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.553 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.553 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.554 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.554 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.554 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.555 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.555 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.556 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.556 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.556 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T09:50:20.551494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.557 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:50:20.552603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.557 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:50:20.553668) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:50:20.555048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:50:20.556268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:50:20.557736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.559 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:50:20.559180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.559 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.560 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.560 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.560 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.560 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:50:20.560536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5>]
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.563 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.563 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.563 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2: ceilometer.compute.pollsters.NoVolumeException
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.563 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/memory.usage volume: 43.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T09:50:20.562030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.564 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.564 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.564 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:50:20.563109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.564 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:50:20.564346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.565 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.566 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.566 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.567 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/cpu volume: 12970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.567 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/cpu volume: 310890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:50:20.565639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.568 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.568 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.569 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.569 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.570 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:50:20.567089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.571 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:50:20.568424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.571 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.572 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.572 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.572 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 1094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.572 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.574 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:50:20.570578) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.574 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:50:20.571911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:50:20.574032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:50:20.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:50:21 compute-0 podman[256803]: 2025-12-01 09:50:21.700236716 +0000 UTC m=+0.076754616 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  1 09:50:21 compute-0 podman[256804]: 2025-12-01 09:50:21.735268751 +0000 UTC m=+0.108043519 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 09:50:23 compute-0 nova_compute[189491]: 2025-12-01 09:50:23.334 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:25 compute-0 nova_compute[189491]: 2025-12-01 09:50:25.078 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:26.542 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:50:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:26.544 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:50:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:50:26.544 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:50:28 compute-0 nova_compute[189491]: 2025-12-01 09:50:28.338 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:29 compute-0 podman[203700]: time="2025-12-01T09:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:50:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:50:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4817 "" "Go-http-client/1.1"
Dec  1 09:50:30 compute-0 nova_compute[189491]: 2025-12-01 09:50:30.089 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:31 compute-0 openstack_network_exporter[205866]: ERROR   09:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:50:31 compute-0 openstack_network_exporter[205866]: ERROR   09:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:50:31 compute-0 openstack_network_exporter[205866]: ERROR   09:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:50:31 compute-0 openstack_network_exporter[205866]: ERROR   09:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:50:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:50:31 compute-0 openstack_network_exporter[205866]: ERROR   09:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:50:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:50:31 compute-0 podman[256845]: 2025-12-01 09:50:31.73525246 +0000 UTC m=+0.095646116 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:50:31 compute-0 podman[256846]: 2025-12-01 09:50:31.734852261 +0000 UTC m=+0.095650137 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:50:33 compute-0 nova_compute[189491]: 2025-12-01 09:50:33.339 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:35 compute-0 nova_compute[189491]: 2025-12-01 09:50:35.093 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:35 compute-0 ovn_controller[97794]: 2025-12-01T09:50:35Z|00179|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec  1 09:50:38 compute-0 nova_compute[189491]: 2025-12-01 09:50:38.341 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:40 compute-0 nova_compute[189491]: 2025-12-01 09:50:40.097 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:41 compute-0 podman[256901]: 2025-12-01 09:50:41.696634036 +0000 UTC m=+0.066602747 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:50:41 compute-0 podman[256903]: 2025-12-01 09:50:41.721314538 +0000 UTC m=+0.082572377 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, container_name=kepler, name=ubi9, io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  1 09:50:41 compute-0 podman[256902]: 2025-12-01 09:50:41.729739374 +0000 UTC m=+0.084013872 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 09:50:42 compute-0 nova_compute[189491]: 2025-12-01 09:50:42.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:50:42 compute-0 nova_compute[189491]: 2025-12-01 09:50:42.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:50:42 compute-0 nova_compute[189491]: 2025-12-01 09:50:42.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:50:43 compute-0 nova_compute[189491]: 2025-12-01 09:50:43.009 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:50:43 compute-0 nova_compute[189491]: 2025-12-01 09:50:43.010 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:50:43 compute-0 nova_compute[189491]: 2025-12-01 09:50:43.011 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:50:43 compute-0 nova_compute[189491]: 2025-12-01 09:50:43.011 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:50:43 compute-0 ovn_controller[97794]: 2025-12-01T09:50:43Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:35:95 10.100.3.35
Dec  1 09:50:43 compute-0 ovn_controller[97794]: 2025-12-01T09:50:43Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:35:95 10.100.3.35
Dec  1 09:50:43 compute-0 nova_compute[189491]: 2025-12-01 09:50:43.343 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:45 compute-0 nova_compute[189491]: 2025-12-01 09:50:45.035 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:50:45 compute-0 nova_compute[189491]: 2025-12-01 09:50:45.055 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:50:45 compute-0 nova_compute[189491]: 2025-12-01 09:50:45.056 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:50:45 compute-0 nova_compute[189491]: 2025-12-01 09:50:45.102 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:48 compute-0 nova_compute[189491]: 2025-12-01 09:50:48.347 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:48 compute-0 podman[256963]: 2025-12-01 09:50:48.709882422 +0000 UTC m=+0.076266224 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:50:48 compute-0 podman[256962]: 2025-12-01 09:50:48.720111872 +0000 UTC m=+0.091438844 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64)
Dec  1 09:50:50 compute-0 nova_compute[189491]: 2025-12-01 09:50:50.105 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:52 compute-0 podman[257000]: 2025-12-01 09:50:52.694153743 +0000 UTC m=+0.070224576 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  1 09:50:52 compute-0 nova_compute[189491]: 2025-12-01 09:50:52.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:50:52 compute-0 nova_compute[189491]: 2025-12-01 09:50:52.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:50:52 compute-0 nova_compute[189491]: 2025-12-01 09:50:52.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:50:52 compute-0 podman[257001]: 2025-12-01 09:50:52.780315747 +0000 UTC m=+0.151245624 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 09:50:52 compute-0 nova_compute[189491]: 2025-12-01 09:50:52.790 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:50:52 compute-0 nova_compute[189491]: 2025-12-01 09:50:52.791 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:50:52 compute-0 nova_compute[189491]: 2025-12-01 09:50:52.791 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:50:52 compute-0 nova_compute[189491]: 2025-12-01 09:50:52.791 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:50:52 compute-0 nova_compute[189491]: 2025-12-01 09:50:52.955 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.027 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.028 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.090 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.099 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.166 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.168 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.230 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.350 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.588 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.589 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4921MB free_disk=72.24847793579102GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.590 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.591 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.681 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.682 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.683 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.683 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.747 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.774 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.797 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:50:53 compute-0 nova_compute[189491]: 2025-12-01 09:50:53.798 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:50:55 compute-0 nova_compute[189491]: 2025-12-01 09:50:55.109 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:56 compute-0 nova_compute[189491]: 2025-12-01 09:50:56.798 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:50:56 compute-0 nova_compute[189491]: 2025-12-01 09:50:56.800 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:50:56 compute-0 nova_compute[189491]: 2025-12-01 09:50:56.800 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:50:58 compute-0 nova_compute[189491]: 2025-12-01 09:50:58.352 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:50:59 compute-0 nova_compute[189491]: 2025-12-01 09:50:59.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:50:59 compute-0 nova_compute[189491]: 2025-12-01 09:50:59.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:50:59 compute-0 podman[203700]: time="2025-12-01T09:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:50:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:50:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Dec  1 09:51:00 compute-0 nova_compute[189491]: 2025-12-01 09:51:00.116 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:01 compute-0 openstack_network_exporter[205866]: ERROR   09:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:51:01 compute-0 openstack_network_exporter[205866]: ERROR   09:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:51:01 compute-0 openstack_network_exporter[205866]: ERROR   09:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:51:01 compute-0 openstack_network_exporter[205866]: ERROR   09:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:51:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:51:01 compute-0 openstack_network_exporter[205866]: ERROR   09:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:51:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:51:01 compute-0 nova_compute[189491]: 2025-12-01 09:51:01.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:51:02 compute-0 podman[257054]: 2025-12-01 09:51:02.693845004 +0000 UTC m=+0.067889418 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:51:02 compute-0 podman[257055]: 2025-12-01 09:51:02.698253642 +0000 UTC m=+0.069697432 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 09:51:03 compute-0 nova_compute[189491]: 2025-12-01 09:51:03.352 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:05 compute-0 nova_compute[189491]: 2025-12-01 09:51:05.121 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:08 compute-0 nova_compute[189491]: 2025-12-01 09:51:08.356 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:10 compute-0 nova_compute[189491]: 2025-12-01 09:51:10.126 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:12 compute-0 podman[257101]: 2025-12-01 09:51:12.716754113 +0000 UTC m=+0.079925943 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:51:12 compute-0 podman[257102]: 2025-12-01 09:51:12.734452274 +0000 UTC m=+0.090736655 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-type=git, version=9.4, maintainer=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Dec  1 09:51:12 compute-0 podman[257100]: 2025-12-01 09:51:12.752883105 +0000 UTC m=+0.116825964 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:51:13 compute-0 nova_compute[189491]: 2025-12-01 09:51:13.357 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:15 compute-0 nova_compute[189491]: 2025-12-01 09:51:15.129 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:18 compute-0 nova_compute[189491]: 2025-12-01 09:51:18.359 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:19 compute-0 podman[257163]: 2025-12-01 09:51:19.689878118 +0000 UTC m=+0.064024574 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 09:51:19 compute-0 podman[257162]: 2025-12-01 09:51:19.727929638 +0000 UTC m=+0.106802729 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible)
Dec  1 09:51:20 compute-0 nova_compute[189491]: 2025-12-01 09:51:20.133 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:23 compute-0 nova_compute[189491]: 2025-12-01 09:51:23.361 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:23 compute-0 podman[257197]: 2025-12-01 09:51:23.699281454 +0000 UTC m=+0.074335486 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 09:51:23 compute-0 podman[257198]: 2025-12-01 09:51:23.757590437 +0000 UTC m=+0.127284569 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 09:51:25 compute-0 nova_compute[189491]: 2025-12-01 09:51:25.137 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:51:26.544 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:51:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:51:26.547 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:51:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:51:26.548 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:51:28 compute-0 nova_compute[189491]: 2025-12-01 09:51:28.362 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:29 compute-0 podman[203700]: time="2025-12-01T09:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:51:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:51:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 09:51:30 compute-0 nova_compute[189491]: 2025-12-01 09:51:30.140 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:31 compute-0 openstack_network_exporter[205866]: ERROR   09:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:51:31 compute-0 openstack_network_exporter[205866]: ERROR   09:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:51:31 compute-0 openstack_network_exporter[205866]: ERROR   09:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:51:31 compute-0 openstack_network_exporter[205866]: ERROR   09:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:51:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:51:31 compute-0 openstack_network_exporter[205866]: ERROR   09:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:51:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:51:33 compute-0 nova_compute[189491]: 2025-12-01 09:51:33.366 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:33 compute-0 podman[257242]: 2025-12-01 09:51:33.695596662 +0000 UTC m=+0.068378780 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:51:33 compute-0 podman[257243]: 2025-12-01 09:51:33.722044638 +0000 UTC m=+0.091823442 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:51:35 compute-0 nova_compute[189491]: 2025-12-01 09:51:35.145 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:38 compute-0 nova_compute[189491]: 2025-12-01 09:51:38.369 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:40 compute-0 nova_compute[189491]: 2025-12-01 09:51:40.149 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:43 compute-0 nova_compute[189491]: 2025-12-01 09:51:43.370 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:43 compute-0 podman[257296]: 2025-12-01 09:51:43.730412691 +0000 UTC m=+0.088872470 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, architecture=x86_64, container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:51:43 compute-0 podman[257295]: 2025-12-01 09:51:43.730743219 +0000 UTC m=+0.089201378 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 09:51:43 compute-0 podman[257294]: 2025-12-01 09:51:43.734339738 +0000 UTC m=+0.101445958 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:51:44 compute-0 nova_compute[189491]: 2025-12-01 09:51:44.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:51:44 compute-0 nova_compute[189491]: 2025-12-01 09:51:44.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:51:45 compute-0 nova_compute[189491]: 2025-12-01 09:51:45.092 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:51:45 compute-0 nova_compute[189491]: 2025-12-01 09:51:45.092 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:51:45 compute-0 nova_compute[189491]: 2025-12-01 09:51:45.093 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:51:45 compute-0 nova_compute[189491]: 2025-12-01 09:51:45.157 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:46 compute-0 nova_compute[189491]: 2025-12-01 09:51:46.351 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updating instance_info_cache with network_info: [{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:51:46 compute-0 nova_compute[189491]: 2025-12-01 09:51:46.376 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:51:46 compute-0 nova_compute[189491]: 2025-12-01 09:51:46.377 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:51:48 compute-0 nova_compute[189491]: 2025-12-01 09:51:48.373 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:50 compute-0 nova_compute[189491]: 2025-12-01 09:51:50.161 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:50 compute-0 podman[257352]: 2025-12-01 09:51:50.700338559 +0000 UTC m=+0.077810271 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:51:50 compute-0 podman[257353]: 2025-12-01 09:51:50.721665699 +0000 UTC m=+0.093200766 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.741 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.741 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.742 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.818 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.882 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.884 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.945 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:51:52 compute-0 nova_compute[189491]: 2025-12-01 09:51:52.953 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.023 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.024 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.090 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.374 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.473 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.474 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4903MB free_disk=72.24847793579102GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.475 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.476 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.650 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.651 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.652 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.652 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.716 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.747 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.749 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:51:53 compute-0 nova_compute[189491]: 2025-12-01 09:51:53.750 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:51:54 compute-0 podman[257401]: 2025-12-01 09:51:54.703444329 +0000 UTC m=+0.075034692 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:51:54 compute-0 podman[257402]: 2025-12-01 09:51:54.760580805 +0000 UTC m=+0.125772132 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 09:51:55 compute-0 nova_compute[189491]: 2025-12-01 09:51:55.164 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:55 compute-0 nova_compute[189491]: 2025-12-01 09:51:55.745 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:51:56 compute-0 nova_compute[189491]: 2025-12-01 09:51:56.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:51:57 compute-0 nova_compute[189491]: 2025-12-01 09:51:57.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:51:57 compute-0 nova_compute[189491]: 2025-12-01 09:51:57.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:51:58 compute-0 nova_compute[189491]: 2025-12-01 09:51:58.378 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:51:59 compute-0 nova_compute[189491]: 2025-12-01 09:51:59.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:51:59 compute-0 podman[203700]: time="2025-12-01T09:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:51:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:51:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec  1 09:52:00 compute-0 nova_compute[189491]: 2025-12-01 09:52:00.167 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:01 compute-0 openstack_network_exporter[205866]: ERROR   09:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:52:01 compute-0 openstack_network_exporter[205866]: ERROR   09:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:52:01 compute-0 openstack_network_exporter[205866]: ERROR   09:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:52:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:52:01 compute-0 openstack_network_exporter[205866]: ERROR   09:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:52:01 compute-0 openstack_network_exporter[205866]: ERROR   09:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:52:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:52:01 compute-0 nova_compute[189491]: 2025-12-01 09:52:01.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:02 compute-0 nova_compute[189491]: 2025-12-01 09:52:02.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:03 compute-0 nova_compute[189491]: 2025-12-01 09:52:03.380 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:04 compute-0 podman[257444]: 2025-12-01 09:52:04.700051036 +0000 UTC m=+0.066905325 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:52:04 compute-0 nova_compute[189491]: 2025-12-01 09:52:04.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:04 compute-0 podman[257445]: 2025-12-01 09:52:04.737546022 +0000 UTC m=+0.102216867 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec  1 09:52:05 compute-0 nova_compute[189491]: 2025-12-01 09:52:05.171 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:08 compute-0 nova_compute[189491]: 2025-12-01 09:52:08.382 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:09 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 09:52:10 compute-0 nova_compute[189491]: 2025-12-01 09:52:10.175 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:13 compute-0 nova_compute[189491]: 2025-12-01 09:52:13.384 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:14 compute-0 podman[257491]: 2025-12-01 09:52:14.713138705 +0000 UTC m=+0.084690050 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 09:52:14 compute-0 podman[257490]: 2025-12-01 09:52:14.722669007 +0000 UTC m=+0.095886462 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:52:14 compute-0 podman[257492]: 2025-12-01 09:52:14.734000583 +0000 UTC m=+0.095858651 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, version=9.4, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, name=ubi9, container_name=kepler, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0)
Dec  1 09:52:15 compute-0 nova_compute[189491]: 2025-12-01 09:52:15.179 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:18 compute-0 nova_compute[189491]: 2025-12-01 09:52:18.387 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.796 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.797 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c3bdd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.804 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2', 'name': 'te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.808 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'name': 'te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.808 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.808 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.808 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.809 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.809 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:52:19.809043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.844 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.845 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.885 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 31078912 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.886 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.887 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.887 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.887 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.887 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.888 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:52:19.887488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.901 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.901 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.915 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.915 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.916 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.916 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.917 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 537383683 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.917 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 120965921 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.917 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 558901098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.917 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 60948895 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.918 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.919 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.919 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.919 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:52:19.916908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.920 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.920 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.920 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.920 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.921 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.921 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.921 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.922 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:52:19.918732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:52:19.920517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:52:19.922117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.940 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.958 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.959 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.959 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.960 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.960 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 2420440038 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.960 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.960 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 3075326058 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.960 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.961 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.961 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.961 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.961 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.962 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.962 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.963 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.963 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.963 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:52:19.959922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:52:19.961662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:52:19.963400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.967 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.970 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.970 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.971 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.971 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.971 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.971 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.971 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.972 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.972 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.972 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.972 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.972 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:52:19.971347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.973 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.973 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.973 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:52:19.972602) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.974 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.974 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.974 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.974 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.975 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.975 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.975 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.975 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:52:19.973832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.976 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:52:19.974719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.975 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.976 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.976 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:52:19.975888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.976 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.977 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.977 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.977 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.977 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.977 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.978 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.978 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.978 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.978 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.979 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.979 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.980 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.980 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/memory.usage volume: 43.69140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.980 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/memory.usage volume: 42.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.980 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.981 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.981 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes.delta volume: 1886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:52:19.977676) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.981 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:52:19.978761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.981 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.982 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.982 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.982 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:52:19.980000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.982 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.983 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.983 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.983 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.983 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/cpu volume: 130910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:52:19.981190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/cpu volume: 335810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:52:19.982465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.984 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.985 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.985 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.985 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.986 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.986 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.986 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.986 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:52:19.983823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.987 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.988 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.988 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.988 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.989 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.989 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.989 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:52:19.984837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:52:19.986404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:52:19.987639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:52:19.989199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:52:19.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:52:20 compute-0 nova_compute[189491]: 2025-12-01 09:52:20.183 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:21 compute-0 podman[257549]: 2025-12-01 09:52:21.700922918 +0000 UTC m=+0.073449504 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  1 09:52:21 compute-0 podman[257548]: 2025-12-01 09:52:21.703726146 +0000 UTC m=+0.075848783 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:52:23 compute-0 nova_compute[189491]: 2025-12-01 09:52:23.390 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:25 compute-0 nova_compute[189491]: 2025-12-01 09:52:25.188 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:25 compute-0 podman[257586]: 2025-12-01 09:52:25.727180545 +0000 UTC m=+0.095636156 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  1 09:52:25 compute-0 podman[257587]: 2025-12-01 09:52:25.751125389 +0000 UTC m=+0.114062065 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  1 09:52:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:52:26.545 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:52:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:52:26.545 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:52:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:52:26.545 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:52:28 compute-0 nova_compute[189491]: 2025-12-01 09:52:28.391 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:29 compute-0 podman[203700]: time="2025-12-01T09:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:52:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:52:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 09:52:30 compute-0 nova_compute[189491]: 2025-12-01 09:52:30.194 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:31 compute-0 openstack_network_exporter[205866]: ERROR   09:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:52:31 compute-0 openstack_network_exporter[205866]: ERROR   09:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:52:31 compute-0 openstack_network_exporter[205866]: ERROR   09:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:52:31 compute-0 openstack_network_exporter[205866]: ERROR   09:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:52:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:52:31 compute-0 openstack_network_exporter[205866]: ERROR   09:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:52:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:52:33 compute-0 nova_compute[189491]: 2025-12-01 09:52:33.393 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:35 compute-0 nova_compute[189491]: 2025-12-01 09:52:35.199 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:35 compute-0 podman[257631]: 2025-12-01 09:52:35.69550709 +0000 UTC m=+0.068856363 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:52:35 compute-0 podman[257632]: 2025-12-01 09:52:35.721437653 +0000 UTC m=+0.094021326 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 09:52:38 compute-0 nova_compute[189491]: 2025-12-01 09:52:38.395 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:39 compute-0 nova_compute[189491]: 2025-12-01 09:52:39.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:39 compute-0 nova_compute[189491]: 2025-12-01 09:52:39.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:52:40 compute-0 nova_compute[189491]: 2025-12-01 09:52:40.204 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:43 compute-0 nova_compute[189491]: 2025-12-01 09:52:43.398 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:45 compute-0 nova_compute[189491]: 2025-12-01 09:52:45.209 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:45 compute-0 podman[257673]: 2025-12-01 09:52:45.691257564 +0000 UTC m=+0.063631393 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:52:45 compute-0 podman[257675]: 2025-12-01 09:52:45.71569604 +0000 UTC m=+0.077078551 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30)
Dec  1 09:52:45 compute-0 nova_compute[189491]: 2025-12-01 09:52:45.730 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:45 compute-0 nova_compute[189491]: 2025-12-01 09:52:45.730 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:52:45 compute-0 nova_compute[189491]: 2025-12-01 09:52:45.730 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:52:45 compute-0 podman[257674]: 2025-12-01 09:52:45.736860466 +0000 UTC m=+0.102268725 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 09:52:46 compute-0 nova_compute[189491]: 2025-12-01 09:52:46.275 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:52:46 compute-0 nova_compute[189491]: 2025-12-01 09:52:46.276 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:52:46 compute-0 nova_compute[189491]: 2025-12-01 09:52:46.277 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:52:46 compute-0 nova_compute[189491]: 2025-12-01 09:52:46.277 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:52:47 compute-0 nova_compute[189491]: 2025-12-01 09:52:47.680 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:52:47 compute-0 nova_compute[189491]: 2025-12-01 09:52:47.699 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:52:47 compute-0 nova_compute[189491]: 2025-12-01 09:52:47.700 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:52:48 compute-0 nova_compute[189491]: 2025-12-01 09:52:48.400 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:50 compute-0 nova_compute[189491]: 2025-12-01 09:52:50.213 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:52 compute-0 nova_compute[189491]: 2025-12-01 09:52:52.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:52 compute-0 podman[257732]: 2025-12-01 09:52:52.737623383 +0000 UTC m=+0.092539147 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, config_id=edpm, distribution-scope=public)
Dec  1 09:52:52 compute-0 podman[257733]: 2025-12-01 09:52:52.747178387 +0000 UTC m=+0.099206351 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 09:52:52 compute-0 nova_compute[189491]: 2025-12-01 09:52:52.749 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:52:52 compute-0 nova_compute[189491]: 2025-12-01 09:52:52.750 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:52:52 compute-0 nova_compute[189491]: 2025-12-01 09:52:52.750 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:52:52 compute-0 nova_compute[189491]: 2025-12-01 09:52:52.750 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:52:52 compute-0 nova_compute[189491]: 2025-12-01 09:52:52.854 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:52:52 compute-0 nova_compute[189491]: 2025-12-01 09:52:52.944 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:52:52 compute-0 nova_compute[189491]: 2025-12-01 09:52:52.948 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.016 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.030 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.095 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.097 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.159 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.404 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.575 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.577 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4887MB free_disk=72.24847412109375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.577 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.578 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.662 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.663 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.663 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.664 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.683 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.706 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.707 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.720 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.743 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.803 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.827 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.828 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:52:53 compute-0 nova_compute[189491]: 2025-12-01 09:52:53.828 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:52:55 compute-0 nova_compute[189491]: 2025-12-01 09:52:55.218 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:55 compute-0 nova_compute[189491]: 2025-12-01 09:52:55.829 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:55 compute-0 nova_compute[189491]: 2025-12-01 09:52:55.830 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:56 compute-0 podman[257784]: 2025-12-01 09:52:56.705269328 +0000 UTC m=+0.083705822 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:52:56 compute-0 nova_compute[189491]: 2025-12-01 09:52:56.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:56 compute-0 nova_compute[189491]: 2025-12-01 09:52:56.717 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:56 compute-0 nova_compute[189491]: 2025-12-01 09:52:56.717 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:52:56 compute-0 podman[257785]: 2025-12-01 09:52:56.751164888 +0000 UTC m=+0.122309705 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Dec  1 09:52:56 compute-0 nova_compute[189491]: 2025-12-01 09:52:56.767 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:52:57 compute-0 nova_compute[189491]: 2025-12-01 09:52:57.765 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:57 compute-0 nova_compute[189491]: 2025-12-01 09:52:57.766 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:52:58 compute-0 nova_compute[189491]: 2025-12-01 09:52:58.405 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:52:59 compute-0 nova_compute[189491]: 2025-12-01 09:52:59.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:52:59 compute-0 podman[203700]: time="2025-12-01T09:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:52:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:52:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 09:53:00 compute-0 nova_compute[189491]: 2025-12-01 09:53:00.223 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:01 compute-0 openstack_network_exporter[205866]: ERROR   09:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:53:01 compute-0 openstack_network_exporter[205866]: ERROR   09:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:53:01 compute-0 openstack_network_exporter[205866]: ERROR   09:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:53:01 compute-0 openstack_network_exporter[205866]: ERROR   09:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:53:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:53:01 compute-0 openstack_network_exporter[205866]: ERROR   09:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:53:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:53:03 compute-0 nova_compute[189491]: 2025-12-01 09:53:03.407 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:03 compute-0 nova_compute[189491]: 2025-12-01 09:53:03.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:04 compute-0 nova_compute[189491]: 2025-12-01 09:53:04.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:04 compute-0 nova_compute[189491]: 2025-12-01 09:53:04.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:05 compute-0 nova_compute[189491]: 2025-12-01 09:53:05.228 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:06 compute-0 podman[257828]: 2025-12-01 09:53:06.71866863 +0000 UTC m=+0.093064631 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:53:06 compute-0 podman[257829]: 2025-12-01 09:53:06.73427641 +0000 UTC m=+0.103702820 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 09:53:08 compute-0 nova_compute[189491]: 2025-12-01 09:53:08.409 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:10 compute-0 nova_compute[189491]: 2025-12-01 09:53:10.232 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:13 compute-0 nova_compute[189491]: 2025-12-01 09:53:13.411 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:15 compute-0 nova_compute[189491]: 2025-12-01 09:53:15.235 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:16 compute-0 podman[257867]: 2025-12-01 09:53:16.698385398 +0000 UTC m=+0.064487103 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:53:16 compute-0 podman[257868]: 2025-12-01 09:53:16.706878836 +0000 UTC m=+0.069058595 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible)
Dec  1 09:53:16 compute-0 podman[257869]: 2025-12-01 09:53:16.742828643 +0000 UTC m=+0.097380767 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, distribution-scope=public, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=)
Dec  1 09:53:18 compute-0 nova_compute[189491]: 2025-12-01 09:53:18.415 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:20 compute-0 nova_compute[189491]: 2025-12-01 09:53:20.238 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:23 compute-0 nova_compute[189491]: 2025-12-01 09:53:23.416 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:23 compute-0 podman[257931]: 2025-12-01 09:53:23.696316797 +0000 UTC m=+0.068280786 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350)
Dec  1 09:53:23 compute-0 podman[257932]: 2025-12-01 09:53:23.715213678 +0000 UTC m=+0.085365783 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:53:25 compute-0 nova_compute[189491]: 2025-12-01 09:53:25.242 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:53:26.546 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:53:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:53:26.546 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:53:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:53:26.547 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:53:27 compute-0 podman[257969]: 2025-12-01 09:53:27.710511507 +0000 UTC m=+0.077842699 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:53:27 compute-0 podman[257970]: 2025-12-01 09:53:27.750114594 +0000 UTC m=+0.113199842 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:53:28 compute-0 nova_compute[189491]: 2025-12-01 09:53:28.417 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:29 compute-0 podman[203700]: time="2025-12-01T09:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:53:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:53:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 09:53:30 compute-0 nova_compute[189491]: 2025-12-01 09:53:30.246 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:31 compute-0 openstack_network_exporter[205866]: ERROR   09:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:53:31 compute-0 openstack_network_exporter[205866]: ERROR   09:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:53:31 compute-0 openstack_network_exporter[205866]: ERROR   09:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:53:31 compute-0 openstack_network_exporter[205866]: ERROR   09:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:53:31 compute-0 openstack_network_exporter[205866]: ERROR   09:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:53:33 compute-0 nova_compute[189491]: 2025-12-01 09:53:33.418 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:35 compute-0 nova_compute[189491]: 2025-12-01 09:53:35.249 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:37 compute-0 podman[258013]: 2025-12-01 09:53:37.697110435 +0000 UTC m=+0.073651988 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 09:53:37 compute-0 podman[258014]: 2025-12-01 09:53:37.70140606 +0000 UTC m=+0.073673768 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 09:53:38 compute-0 nova_compute[189491]: 2025-12-01 09:53:38.422 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:40 compute-0 nova_compute[189491]: 2025-12-01 09:53:40.252 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:43 compute-0 nova_compute[189491]: 2025-12-01 09:53:43.423 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:45 compute-0 nova_compute[189491]: 2025-12-01 09:53:45.255 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:45 compute-0 nova_compute[189491]: 2025-12-01 09:53:45.879 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:45 compute-0 nova_compute[189491]: 2025-12-01 09:53:45.881 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:53:46 compute-0 nova_compute[189491]: 2025-12-01 09:53:46.286 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:53:46 compute-0 nova_compute[189491]: 2025-12-01 09:53:46.287 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:53:46 compute-0 nova_compute[189491]: 2025-12-01 09:53:46.288 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:53:47 compute-0 nova_compute[189491]: 2025-12-01 09:53:47.446 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updating instance_info_cache with network_info: [{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:53:47 compute-0 nova_compute[189491]: 2025-12-01 09:53:47.467 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:53:47 compute-0 nova_compute[189491]: 2025-12-01 09:53:47.468 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:53:47 compute-0 podman[258056]: 2025-12-01 09:53:47.715617681 +0000 UTC m=+0.089766461 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:53:47 compute-0 podman[258057]: 2025-12-01 09:53:47.722630352 +0000 UTC m=+0.096326751 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  1 09:53:47 compute-0 podman[258058]: 2025-12-01 09:53:47.733381934 +0000 UTC m=+0.101680571 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Dec  1 09:53:48 compute-0 nova_compute[189491]: 2025-12-01 09:53:48.426 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:50 compute-0 nova_compute[189491]: 2025-12-01 09:53:50.260 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:53 compute-0 nova_compute[189491]: 2025-12-01 09:53:53.427 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:54 compute-0 podman[258120]: 2025-12-01 09:53:54.711468249 +0000 UTC m=+0.088907920 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec  1 09:53:54 compute-0 nova_compute[189491]: 2025-12-01 09:53:54.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:54 compute-0 podman[258121]: 2025-12-01 09:53:54.715824385 +0000 UTC m=+0.079478340 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:53:54 compute-0 nova_compute[189491]: 2025-12-01 09:53:54.764 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:53:54 compute-0 nova_compute[189491]: 2025-12-01 09:53:54.765 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:53:54 compute-0 nova_compute[189491]: 2025-12-01 09:53:54.765 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:53:54 compute-0 nova_compute[189491]: 2025-12-01 09:53:54.766 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:53:54 compute-0 nova_compute[189491]: 2025-12-01 09:53:54.849 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:53:54 compute-0 nova_compute[189491]: 2025-12-01 09:53:54.945 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:53:54 compute-0 nova_compute[189491]: 2025-12-01 09:53:54.947 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.046 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.056 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.148 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.150 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.215 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.264 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.553 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.554 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4879MB free_disk=72.24847412109375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.555 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.555 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.702 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.703 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.703 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.704 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.851 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.866 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.867 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:53:55 compute-0 nova_compute[189491]: 2025-12-01 09:53:55.868 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:53:56 compute-0 nova_compute[189491]: 2025-12-01 09:53:56.869 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:56 compute-0 nova_compute[189491]: 2025-12-01 09:53:56.869 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:58 compute-0 nova_compute[189491]: 2025-12-01 09:53:58.430 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:53:58 compute-0 nova_compute[189491]: 2025-12-01 09:53:58.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:58 compute-0 nova_compute[189491]: 2025-12-01 09:53:58.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:53:58 compute-0 nova_compute[189491]: 2025-12-01 09:53:58.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:53:58 compute-0 podman[258172]: 2025-12-01 09:53:58.724559942 +0000 UTC m=+0.098458672 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:53:58 compute-0 podman[258173]: 2025-12-01 09:53:58.759089755 +0000 UTC m=+0.131526489 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  1 09:53:59 compute-0 podman[203700]: time="2025-12-01T09:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:53:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:53:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec  1 09:54:00 compute-0 nova_compute[189491]: 2025-12-01 09:54:00.269 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:00 compute-0 nova_compute[189491]: 2025-12-01 09:54:00.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:01 compute-0 openstack_network_exporter[205866]: ERROR   09:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:54:01 compute-0 openstack_network_exporter[205866]: ERROR   09:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:54:01 compute-0 openstack_network_exporter[205866]: ERROR   09:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:54:01 compute-0 openstack_network_exporter[205866]: ERROR   09:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:54:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:54:01 compute-0 openstack_network_exporter[205866]: ERROR   09:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:54:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:54:03 compute-0 nova_compute[189491]: 2025-12-01 09:54:03.432 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:05 compute-0 nova_compute[189491]: 2025-12-01 09:54:05.273 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:05 compute-0 nova_compute[189491]: 2025-12-01 09:54:05.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:05 compute-0 nova_compute[189491]: 2025-12-01 09:54:05.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:08 compute-0 nova_compute[189491]: 2025-12-01 09:54:08.436 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:08 compute-0 nova_compute[189491]: 2025-12-01 09:54:08.708 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:08 compute-0 podman[258215]: 2025-12-01 09:54:08.724407233 +0000 UTC m=+0.087701741 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:54:08 compute-0 podman[258216]: 2025-12-01 09:54:08.747779833 +0000 UTC m=+0.101668211 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 09:54:10 compute-0 nova_compute[189491]: 2025-12-01 09:54:10.277 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:13 compute-0 nova_compute[189491]: 2025-12-01 09:54:13.440 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:15 compute-0 nova_compute[189491]: 2025-12-01 09:54:15.282 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:18 compute-0 nova_compute[189491]: 2025-12-01 09:54:18.441 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:18 compute-0 podman[258256]: 2025-12-01 09:54:18.712431636 +0000 UTC m=+0.078278391 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:54:18 compute-0 podman[258257]: 2025-12-01 09:54:18.717373976 +0000 UTC m=+0.078926806 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:54:18 compute-0 podman[258258]: 2025-12-01 09:54:18.739507436 +0000 UTC m=+0.094004374 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, managed_by=edpm_ansible)
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.796 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.797 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.803 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2', 'name': 'te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.809 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'name': 'te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.810 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.810 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.810 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.811 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:54:19.810904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.854 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.855 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.903 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 31078912 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.903 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.904 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.904 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:54:19.904816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.920 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.921 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.937 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.937 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.938 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.938 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.938 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.938 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 537383683 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.939 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 120965921 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.939 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 558901098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.939 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 60948895 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:54:19.938677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.940 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.940 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:54:19.940659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.941 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.941 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.941 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.942 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.942 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.942 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.942 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:54:19.942382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.942 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.942 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.943 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.943 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.943 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.944 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.944 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.944 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:54:19.944231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.969 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.996 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.997 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.997 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 2420440038 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.997 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.998 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 3075326058 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.998 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:54:19.997224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:54:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.999 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.999 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:54:19.999208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.999 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:19.999 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.000 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.000 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.001 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:54:20.001030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.005 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.009 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:54:20.011185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.011 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.012 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:54:20.013201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.013 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.014 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:54:20.014865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.015 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.016 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:54:20.016470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.016 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.017 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.017 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.017 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:54:20.018263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.018 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.019 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:54:20.019955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.021 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.021 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:54:20.022478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.023 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.023 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.023 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.024 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:54:20.024301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.024 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.025 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/memory.usage volume: 43.65234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.026 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/memory.usage volume: 42.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.027 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:54:20.027371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.027 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.028 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.028 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.028 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:54:20.029329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.029 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.030 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.030 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.031 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:54:20.031512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.031 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.032 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/cpu volume: 250600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.032 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/cpu volume: 337100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.033 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.033 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:54:20.033641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.033 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.034 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.034 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.035 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.035 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.036 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:54:20.036494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.037 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.037 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.038 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:54:20.038330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.039 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.039 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.039 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.039 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.040 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:54:20.040593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.040 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.041 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.041 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.042 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.042 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.042 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.042 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.042 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.042 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.042 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.042 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:54:20.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:54:20 compute-0 nova_compute[189491]: 2025-12-01 09:54:20.286 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:54:23 compute-0 nova_compute[189491]: 2025-12-01 09:54:23.447 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:54:25 compute-0 nova_compute[189491]: 2025-12-01 09:54:25.290 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:54:25 compute-0 podman[258320]: 2025-12-01 09:54:25.718683076 +0000 UTC m=+0.087885953 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, release=1755695350)
Dec  1 09:54:25 compute-0 podman[258321]: 2025-12-01 09:54:25.733538939 +0000 UTC m=+0.097836827 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:54:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:54:26.547 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 09:54:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:54:26.547 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 09:54:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:54:26.548 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 09:54:28 compute-0 nova_compute[189491]: 2025-12-01 09:54:28.447 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 09:54:29 compute-0 podman[258356]: 2025-12-01 09:54:29.72851639 +0000 UTC m=+0.101571178 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  1 09:54:29 compute-0 podman[258357]: 2025-12-01 09:54:29.735251544 +0000 UTC m=+0.101381253 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 09:54:29 compute-0 podman[203700]: time="2025-12-01T09:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:54:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:54:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 09:54:30 compute-0 nova_compute[189491]: 2025-12-01 09:54:30.294 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:31 compute-0 openstack_network_exporter[205866]: ERROR   09:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:54:31 compute-0 openstack_network_exporter[205866]: ERROR   09:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:54:31 compute-0 openstack_network_exporter[205866]: ERROR   09:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:54:31 compute-0 openstack_network_exporter[205866]: ERROR   09:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:54:31 compute-0 openstack_network_exporter[205866]: ERROR   09:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:54:33 compute-0 nova_compute[189491]: 2025-12-01 09:54:33.449 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:35 compute-0 nova_compute[189491]: 2025-12-01 09:54:35.298 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:38 compute-0 nova_compute[189491]: 2025-12-01 09:54:38.451 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:39 compute-0 podman[258398]: 2025-12-01 09:54:39.696238987 +0000 UTC m=+0.071199257 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:54:39 compute-0 podman[258399]: 2025-12-01 09:54:39.701423894 +0000 UTC m=+0.069549178 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 09:54:40 compute-0 nova_compute[189491]: 2025-12-01 09:54:40.302 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:43 compute-0 nova_compute[189491]: 2025-12-01 09:54:43.455 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:45 compute-0 nova_compute[189491]: 2025-12-01 09:54:45.306 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:45 compute-0 nova_compute[189491]: 2025-12-01 09:54:45.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:45 compute-0 nova_compute[189491]: 2025-12-01 09:54:45.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:54:45 compute-0 nova_compute[189491]: 2025-12-01 09:54:45.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:54:46 compute-0 nova_compute[189491]: 2025-12-01 09:54:46.318 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:54:46 compute-0 nova_compute[189491]: 2025-12-01 09:54:46.319 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:54:46 compute-0 nova_compute[189491]: 2025-12-01 09:54:46.319 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:54:46 compute-0 nova_compute[189491]: 2025-12-01 09:54:46.319 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:54:47 compute-0 nova_compute[189491]: 2025-12-01 09:54:47.692 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:54:47 compute-0 nova_compute[189491]: 2025-12-01 09:54:47.710 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:54:47 compute-0 nova_compute[189491]: 2025-12-01 09:54:47.711 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:54:48 compute-0 nova_compute[189491]: 2025-12-01 09:54:48.456 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:49 compute-0 podman[258439]: 2025-12-01 09:54:49.698783261 +0000 UTC m=+0.075631865 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 09:54:49 compute-0 podman[258440]: 2025-12-01 09:54:49.712401714 +0000 UTC m=+0.083136558 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec  1 09:54:49 compute-0 podman[258441]: 2025-12-01 09:54:49.716913574 +0000 UTC m=+0.083102838 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, architecture=x86_64, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler)
Dec  1 09:54:50 compute-0 nova_compute[189491]: 2025-12-01 09:54:50.311 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:53 compute-0 nova_compute[189491]: 2025-12-01 09:54:53.460 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.314 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.744 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.744 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.831 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.901 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.902 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.969 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:54:55 compute-0 nova_compute[189491]: 2025-12-01 09:54:55.979 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.039 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.040 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.111 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.435 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.437 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4887MB free_disk=72.24847412109375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.437 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.438 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.532 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.533 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.533 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.534 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.609 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.634 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.637 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:54:56 compute-0 nova_compute[189491]: 2025-12-01 09:54:56.637 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:54:56 compute-0 podman[258510]: 2025-12-01 09:54:56.702255465 +0000 UTC m=+0.062381312 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec  1 09:54:56 compute-0 podman[258509]: 2025-12-01 09:54:56.727239175 +0000 UTC m=+0.091027491 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public)
Dec  1 09:54:57 compute-0 nova_compute[189491]: 2025-12-01 09:54:57.639 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:57 compute-0 nova_compute[189491]: 2025-12-01 09:54:57.707 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:58 compute-0 nova_compute[189491]: 2025-12-01 09:54:58.462 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:54:58 compute-0 nova_compute[189491]: 2025-12-01 09:54:58.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:54:59 compute-0 podman[203700]: time="2025-12-01T09:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:54:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:54:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec  1 09:55:00 compute-0 nova_compute[189491]: 2025-12-01 09:55:00.319 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:00 compute-0 nova_compute[189491]: 2025-12-01 09:55:00.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:55:00 compute-0 nova_compute[189491]: 2025-12-01 09:55:00.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:55:00 compute-0 podman[258545]: 2025-12-01 09:55:00.722161425 +0000 UTC m=+0.094099686 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:55:00 compute-0 podman[258546]: 2025-12-01 09:55:00.738400331 +0000 UTC m=+0.106859977 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 09:55:01 compute-0 openstack_network_exporter[205866]: ERROR   09:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:55:01 compute-0 openstack_network_exporter[205866]: ERROR   09:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:55:01 compute-0 openstack_network_exporter[205866]: ERROR   09:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:55:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:55:01 compute-0 openstack_network_exporter[205866]: ERROR   09:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:55:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:55:01 compute-0 openstack_network_exporter[205866]: ERROR   09:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:55:02 compute-0 nova_compute[189491]: 2025-12-01 09:55:02.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:55:03 compute-0 nova_compute[189491]: 2025-12-01 09:55:03.463 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:05 compute-0 nova_compute[189491]: 2025-12-01 09:55:05.324 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:06 compute-0 nova_compute[189491]: 2025-12-01 09:55:06.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:55:06 compute-0 nova_compute[189491]: 2025-12-01 09:55:06.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:55:08 compute-0 nova_compute[189491]: 2025-12-01 09:55:08.464 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:10 compute-0 nova_compute[189491]: 2025-12-01 09:55:10.328 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:10 compute-0 podman[258586]: 2025-12-01 09:55:10.728799491 +0000 UTC m=+0.096069323 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:55:10 compute-0 podman[258587]: 2025-12-01 09:55:10.750247645 +0000 UTC m=+0.098478814 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  1 09:55:13 compute-0 nova_compute[189491]: 2025-12-01 09:55:13.468 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:15 compute-0 nova_compute[189491]: 2025-12-01 09:55:15.334 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:18 compute-0 nova_compute[189491]: 2025-12-01 09:55:18.471 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:20 compute-0 nova_compute[189491]: 2025-12-01 09:55:20.339 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:20 compute-0 podman[258626]: 2025-12-01 09:55:20.685807608 +0000 UTC m=+0.060228730 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:55:20 compute-0 podman[258628]: 2025-12-01 09:55:20.705221431 +0000 UTC m=+0.068271266 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., version=9.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, distribution-scope=public, name=ubi9)
Dec  1 09:55:20 compute-0 podman[258627]: 2025-12-01 09:55:20.715507882 +0000 UTC m=+0.083796825 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:55:23 compute-0 nova_compute[189491]: 2025-12-01 09:55:23.471 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:25 compute-0 nova_compute[189491]: 2025-12-01 09:55:25.345 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:55:26.548 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:55:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:55:26.549 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:55:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:55:26.549 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:55:27 compute-0 podman[258688]: 2025-12-01 09:55:27.702759898 +0000 UTC m=+0.072204321 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter)
Dec  1 09:55:27 compute-0 podman[258689]: 2025-12-01 09:55:27.70240012 +0000 UTC m=+0.069996218 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:55:28 compute-0 nova_compute[189491]: 2025-12-01 09:55:28.475 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:29 compute-0 podman[203700]: time="2025-12-01T09:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:55:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:55:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 09:55:30 compute-0 nova_compute[189491]: 2025-12-01 09:55:30.350 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:31 compute-0 openstack_network_exporter[205866]: ERROR   09:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:55:31 compute-0 openstack_network_exporter[205866]: ERROR   09:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:55:31 compute-0 openstack_network_exporter[205866]: ERROR   09:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:55:31 compute-0 openstack_network_exporter[205866]: ERROR   09:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:55:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:55:31 compute-0 openstack_network_exporter[205866]: ERROR   09:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:55:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:55:31 compute-0 podman[258723]: 2025-12-01 09:55:31.698308454 +0000 UTC m=+0.071767641 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:55:31 compute-0 podman[258724]: 2025-12-01 09:55:31.73298664 +0000 UTC m=+0.103031184 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 09:55:33 compute-0 nova_compute[189491]: 2025-12-01 09:55:33.475 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:35 compute-0 nova_compute[189491]: 2025-12-01 09:55:35.352 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:38 compute-0 nova_compute[189491]: 2025-12-01 09:55:38.477 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:40 compute-0 nova_compute[189491]: 2025-12-01 09:55:40.354 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:41 compute-0 podman[258768]: 2025-12-01 09:55:41.710197218 +0000 UTC m=+0.079666524 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:55:41 compute-0 podman[258769]: 2025-12-01 09:55:41.753140535 +0000 UTC m=+0.101501806 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 09:55:43 compute-0 nova_compute[189491]: 2025-12-01 09:55:43.480 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:45 compute-0 nova_compute[189491]: 2025-12-01 09:55:45.357 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:45 compute-0 nova_compute[189491]: 2025-12-01 09:55:45.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:55:45 compute-0 nova_compute[189491]: 2025-12-01 09:55:45.717 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:55:46 compute-0 nova_compute[189491]: 2025-12-01 09:55:46.350 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:55:46 compute-0 nova_compute[189491]: 2025-12-01 09:55:46.351 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:55:46 compute-0 nova_compute[189491]: 2025-12-01 09:55:46.352 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:55:47 compute-0 nova_compute[189491]: 2025-12-01 09:55:47.756 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updating instance_info_cache with network_info: [{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:55:47 compute-0 nova_compute[189491]: 2025-12-01 09:55:47.775 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:55:47 compute-0 nova_compute[189491]: 2025-12-01 09:55:47.776 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:55:48 compute-0 nova_compute[189491]: 2025-12-01 09:55:48.483 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:50 compute-0 nova_compute[189491]: 2025-12-01 09:55:50.361 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:51 compute-0 podman[258814]: 2025-12-01 09:55:51.69826772 +0000 UTC m=+0.067511238 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:55:51 compute-0 podman[258815]: 2025-12-01 09:55:51.71139387 +0000 UTC m=+0.076392734 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 09:55:51 compute-0 podman[258816]: 2025-12-01 09:55:51.750597557 +0000 UTC m=+0.108890747 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4)
Dec  1 09:55:53 compute-0 nova_compute[189491]: 2025-12-01 09:55:53.486 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:55 compute-0 nova_compute[189491]: 2025-12-01 09:55:55.364 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:57 compute-0 nova_compute[189491]: 2025-12-01 09:55:57.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:55:57 compute-0 nova_compute[189491]: 2025-12-01 09:55:57.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.174 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.174 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.175 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.175 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.284 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.354 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.363 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.443 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.459 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.492 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.542 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.546 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:55:58 compute-0 nova_compute[189491]: 2025-12-01 09:55:58.622 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:55:58 compute-0 podman[258883]: 2025-12-01 09:55:58.719810841 +0000 UTC m=+0.091690177 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter)
Dec  1 09:55:58 compute-0 podman[258885]: 2025-12-01 09:55:58.744702128 +0000 UTC m=+0.114180856 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.038 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.041 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4874MB free_disk=72.24847412109375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.042 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.043 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.152 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.152 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.153 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.153 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.214 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.231 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.233 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:55:59 compute-0 nova_compute[189491]: 2025-12-01 09:55:59.233 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:55:59 compute-0 podman[203700]: time="2025-12-01T09:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:55:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:55:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  1 09:56:00 compute-0 nova_compute[189491]: 2025-12-01 09:56:00.228 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:00 compute-0 nova_compute[189491]: 2025-12-01 09:56:00.369 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:00 compute-0 nova_compute[189491]: 2025-12-01 09:56:00.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:01 compute-0 openstack_network_exporter[205866]: ERROR   09:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:56:01 compute-0 openstack_network_exporter[205866]: ERROR   09:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:56:01 compute-0 openstack_network_exporter[205866]: ERROR   09:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:56:01 compute-0 openstack_network_exporter[205866]: ERROR   09:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:56:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:56:01 compute-0 openstack_network_exporter[205866]: ERROR   09:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:56:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:56:01 compute-0 nova_compute[189491]: 2025-12-01 09:56:01.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:01 compute-0 nova_compute[189491]: 2025-12-01 09:56:01.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:56:02 compute-0 nova_compute[189491]: 2025-12-01 09:56:02.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:02 compute-0 podman[258924]: 2025-12-01 09:56:02.738446938 +0000 UTC m=+0.103864094 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:56:02 compute-0 podman[258925]: 2025-12-01 09:56:02.757420751 +0000 UTC m=+0.116327028 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 09:56:03 compute-0 nova_compute[189491]: 2025-12-01 09:56:03.490 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:05 compute-0 nova_compute[189491]: 2025-12-01 09:56:05.373 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:07 compute-0 nova_compute[189491]: 2025-12-01 09:56:07.719 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:08 compute-0 nova_compute[189491]: 2025-12-01 09:56:08.493 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:08 compute-0 nova_compute[189491]: 2025-12-01 09:56:08.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:10 compute-0 nova_compute[189491]: 2025-12-01 09:56:10.377 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:11 compute-0 nova_compute[189491]: 2025-12-01 09:56:11.711 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:12 compute-0 podman[258970]: 2025-12-01 09:56:12.717723501 +0000 UTC m=+0.083107649 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:56:12 compute-0 podman[258971]: 2025-12-01 09:56:12.729726413 +0000 UTC m=+0.089851512 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 09:56:13 compute-0 nova_compute[189491]: 2025-12-01 09:56:13.496 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:15 compute-0 nova_compute[189491]: 2025-12-01 09:56:15.382 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:18 compute-0 nova_compute[189491]: 2025-12-01 09:56:18.497 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.797 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.798 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.805 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2', 'name': 'te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.809 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'name': 'te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.913 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.913 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.810 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.bytes': [<NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5>, <NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5>, <NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84ca1cef0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.bytes': [<NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5>, <NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5>, <NovaLikeServer: te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:56:19.913682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.958 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:19.959 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.001 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 31078912 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.002 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.003 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:56:20.003336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.019 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.020 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.035 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.035 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.036 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.036 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 555739014 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.036 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 127763752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.037 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 558901098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.037 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 60948895 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:56:20.036517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.038 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.038 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.039 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:56:20.038562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.039 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.039 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.040 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.040 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.040 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 73187328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.041 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.041 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.041 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:56:20.040408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.042 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:56:20.042430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.060 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.085 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.086 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.086 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 2476090146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.086 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.087 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 3075326058 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:56:20.086483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.087 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.087 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.088 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.088 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.088 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.089 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:56:20.088096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:56:20.090155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.095 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.099 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.100 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.101 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.102 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.102 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:56:20.101143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:56:20.102302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.103 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.103 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.104 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.104 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.105 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.105 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.105 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.106 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.106 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.106 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.107 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.107 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.107 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:56:20.103427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:56:20.104309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:56:20.105638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:56:20.106885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/memory.usage volume: 42.31640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:56:20.108280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:56:20.109539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.110 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/memory.usage volume: 42.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.110 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.110 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.111 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:56:20.110785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.111 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.111 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.112 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.112 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:56:20.112074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.112 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.113 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.113 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.113 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/cpu volume: 334460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.113 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/cpu volume: 338370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.114 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.114 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.114 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.114 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.114 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:56:20.113231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:56:20.114629) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.115 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.115 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.115 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.116 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.116 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.117 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.118 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:56:20.117610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.118 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.119 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.119 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:56:20.119596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.120 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.120 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.120 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.121 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:56:20.121659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.122 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:56:20.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:56:20 compute-0 nova_compute[189491]: 2025-12-01 09:56:20.385 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:22 compute-0 podman[259013]: 2025-12-01 09:56:22.706716663 +0000 UTC m=+0.074506958 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:56:22 compute-0 podman[259014]: 2025-12-01 09:56:22.707655766 +0000 UTC m=+0.072796587 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Dec  1 09:56:22 compute-0 podman[259012]: 2025-12-01 09:56:22.717743882 +0000 UTC m=+0.089099975 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:56:23 compute-0 nova_compute[189491]: 2025-12-01 09:56:23.499 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:25 compute-0 nova_compute[189491]: 2025-12-01 09:56:25.390 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:56:26.549 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:56:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:56:26.550 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:56:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:56:26.551 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:56:28 compute-0 nova_compute[189491]: 2025-12-01 09:56:28.502 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:29 compute-0 podman[259073]: 2025-12-01 09:56:29.701495902 +0000 UTC m=+0.071269759 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:56:29 compute-0 podman[259072]: 2025-12-01 09:56:29.702066546 +0000 UTC m=+0.077211054 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 09:56:29 compute-0 podman[203700]: time="2025-12-01T09:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:56:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:56:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec  1 09:56:30 compute-0 nova_compute[189491]: 2025-12-01 09:56:30.394 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:31 compute-0 openstack_network_exporter[205866]: ERROR   09:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:56:31 compute-0 openstack_network_exporter[205866]: ERROR   09:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:56:31 compute-0 openstack_network_exporter[205866]: ERROR   09:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:56:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:56:31 compute-0 openstack_network_exporter[205866]: ERROR   09:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:56:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:56:31 compute-0 openstack_network_exporter[205866]: ERROR   09:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:56:33 compute-0 nova_compute[189491]: 2025-12-01 09:56:33.505 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:33 compute-0 podman[259112]: 2025-12-01 09:56:33.724439356 +0000 UTC m=+0.096947565 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec  1 09:56:33 compute-0 podman[259113]: 2025-12-01 09:56:33.749236361 +0000 UTC m=+0.118619424 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 09:56:35 compute-0 nova_compute[189491]: 2025-12-01 09:56:35.398 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:38 compute-0 nova_compute[189491]: 2025-12-01 09:56:38.508 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:40 compute-0 nova_compute[189491]: 2025-12-01 09:56:40.402 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:43 compute-0 nova_compute[189491]: 2025-12-01 09:56:43.510 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:43 compute-0 podman[259152]: 2025-12-01 09:56:43.714516935 +0000 UTC m=+0.087959366 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:56:43 compute-0 podman[259153]: 2025-12-01 09:56:43.724934269 +0000 UTC m=+0.094890144 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 09:56:45 compute-0 nova_compute[189491]: 2025-12-01 09:56:45.405 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:47 compute-0 nova_compute[189491]: 2025-12-01 09:56:47.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:47 compute-0 nova_compute[189491]: 2025-12-01 09:56:47.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:56:47 compute-0 nova_compute[189491]: 2025-12-01 09:56:47.716 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:56:48 compute-0 nova_compute[189491]: 2025-12-01 09:56:48.026 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:56:48 compute-0 nova_compute[189491]: 2025-12-01 09:56:48.026 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:56:48 compute-0 nova_compute[189491]: 2025-12-01 09:56:48.027 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:56:48 compute-0 nova_compute[189491]: 2025-12-01 09:56:48.027 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:56:48 compute-0 nova_compute[189491]: 2025-12-01 09:56:48.512 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:49 compute-0 nova_compute[189491]: 2025-12-01 09:56:49.827 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:56:49 compute-0 nova_compute[189491]: 2025-12-01 09:56:49.936 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:56:49 compute-0 nova_compute[189491]: 2025-12-01 09:56:49.937 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:56:50 compute-0 nova_compute[189491]: 2025-12-01 09:56:50.410 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:53 compute-0 nova_compute[189491]: 2025-12-01 09:56:53.515 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:53 compute-0 podman[259201]: 2025-12-01 09:56:53.690286624 +0000 UTC m=+0.061427279 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 09:56:53 compute-0 podman[259202]: 2025-12-01 09:56:53.709538674 +0000 UTC m=+0.070635045 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, config_id=edpm, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 09:56:53 compute-0 podman[259200]: 2025-12-01 09:56:53.728460345 +0000 UTC m=+0.099216371 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 09:56:55 compute-0 nova_compute[189491]: 2025-12-01 09:56:55.414 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:58 compute-0 nova_compute[189491]: 2025-12-01 09:56:58.517 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:56:58 compute-0 nova_compute[189491]: 2025-12-01 09:56:58.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:58 compute-0 nova_compute[189491]: 2025-12-01 09:56:58.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:56:59 compute-0 podman[203700]: time="2025-12-01T09:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.740 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.741 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.741 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.741 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:56:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:56:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.824 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.926 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.927 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.988 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:56:59 compute-0 nova_compute[189491]: 2025-12-01 09:56:59.997 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.069 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.070 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.134 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.417 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.486 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.487 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4858MB free_disk=72.24858093261719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.487 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.488 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.557 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.558 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.558 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.559 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.621 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.638 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.639 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:57:00 compute-0 nova_compute[189491]: 2025-12-01 09:57:00.639 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:57:00 compute-0 podman[259276]: 2025-12-01 09:57:00.695669801 +0000 UTC m=+0.066612655 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec  1 09:57:00 compute-0 podman[259277]: 2025-12-01 09:57:00.724649708 +0000 UTC m=+0.090776305 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 09:57:01 compute-0 openstack_network_exporter[205866]: ERROR   09:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:57:01 compute-0 openstack_network_exporter[205866]: ERROR   09:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:57:01 compute-0 openstack_network_exporter[205866]: ERROR   09:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:57:01 compute-0 openstack_network_exporter[205866]: ERROR   09:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:57:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:57:01 compute-0 openstack_network_exporter[205866]: ERROR   09:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:57:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:57:03 compute-0 nova_compute[189491]: 2025-12-01 09:57:03.519 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:03 compute-0 nova_compute[189491]: 2025-12-01 09:57:03.639 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:57:03 compute-0 nova_compute[189491]: 2025-12-01 09:57:03.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:57:03 compute-0 nova_compute[189491]: 2025-12-01 09:57:03.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:57:03 compute-0 nova_compute[189491]: 2025-12-01 09:57:03.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:57:04 compute-0 podman[259314]: 2025-12-01 09:57:04.691732576 +0000 UTC m=+0.071173837 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:57:04 compute-0 podman[259315]: 2025-12-01 09:57:04.750576521 +0000 UTC m=+0.126301611 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 09:57:05 compute-0 nova_compute[189491]: 2025-12-01 09:57:05.422 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:07 compute-0 nova_compute[189491]: 2025-12-01 09:57:07.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:57:08 compute-0 nova_compute[189491]: 2025-12-01 09:57:08.522 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:08 compute-0 nova_compute[189491]: 2025-12-01 09:57:08.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:57:10 compute-0 nova_compute[189491]: 2025-12-01 09:57:10.426 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:13 compute-0 nova_compute[189491]: 2025-12-01 09:57:13.525 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:14 compute-0 podman[259359]: 2025-12-01 09:57:14.705765127 +0000 UTC m=+0.081146990 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 09:57:14 compute-0 podman[259358]: 2025-12-01 09:57:14.732094309 +0000 UTC m=+0.110358392 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 09:57:15 compute-0 nova_compute[189491]: 2025-12-01 09:57:15.430 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:18 compute-0 nova_compute[189491]: 2025-12-01 09:57:18.528 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:20 compute-0 nova_compute[189491]: 2025-12-01 09:57:20.434 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:23 compute-0 nova_compute[189491]: 2025-12-01 09:57:23.531 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:24 compute-0 podman[259399]: 2025-12-01 09:57:24.713430807 +0000 UTC m=+0.076828345 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:57:24 compute-0 podman[259398]: 2025-12-01 09:57:24.725191474 +0000 UTC m=+0.093077521 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:57:24 compute-0 podman[259400]: 2025-12-01 09:57:24.737182437 +0000 UTC m=+0.096963016 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9, distribution-scope=public, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': 
'/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 09:57:25 compute-0 nova_compute[189491]: 2025-12-01 09:57:25.439 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:57:26.550 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:57:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:57:26.551 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:57:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:57:26.552 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:57:28 compute-0 nova_compute[189491]: 2025-12-01 09:57:28.534 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:29 compute-0 podman[203700]: time="2025-12-01T09:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:57:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:57:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec  1 09:57:30 compute-0 nova_compute[189491]: 2025-12-01 09:57:30.442 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:31 compute-0 openstack_network_exporter[205866]: ERROR   09:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:57:31 compute-0 openstack_network_exporter[205866]: ERROR   09:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:57:31 compute-0 openstack_network_exporter[205866]: ERROR   09:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:57:31 compute-0 openstack_network_exporter[205866]: ERROR   09:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:57:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:57:31 compute-0 openstack_network_exporter[205866]: ERROR   09:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:57:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:57:31 compute-0 podman[259457]: 2025-12-01 09:57:31.697214119 +0000 UTC m=+0.060661340 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:57:31 compute-0 podman[259456]: 2025-12-01 09:57:31.71734384 +0000 UTC m=+0.085766463 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible)
Dec  1 09:57:33 compute-0 nova_compute[189491]: 2025-12-01 09:57:33.535 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:35 compute-0 nova_compute[189491]: 2025-12-01 09:57:35.445 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:35 compute-0 podman[259492]: 2025-12-01 09:57:35.706754961 +0000 UTC m=+0.075365909 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:57:35 compute-0 podman[259493]: 2025-12-01 09:57:35.764450898 +0000 UTC m=+0.124898707 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:57:38 compute-0 nova_compute[189491]: 2025-12-01 09:57:38.536 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:40 compute-0 nova_compute[189491]: 2025-12-01 09:57:40.449 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:43 compute-0 nova_compute[189491]: 2025-12-01 09:57:43.538 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:43 compute-0 nova_compute[189491]: 2025-12-01 09:57:43.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:57:43 compute-0 nova_compute[189491]: 2025-12-01 09:57:43.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 09:57:45 compute-0 nova_compute[189491]: 2025-12-01 09:57:45.454 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:45 compute-0 podman[259535]: 2025-12-01 09:57:45.692387259 +0000 UTC m=+0.068717147 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:57:45 compute-0 podman[259536]: 2025-12-01 09:57:45.724500442 +0000 UTC m=+0.096293438 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm)
Dec  1 09:57:48 compute-0 nova_compute[189491]: 2025-12-01 09:57:48.541 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:48 compute-0 nova_compute[189491]: 2025-12-01 09:57:48.727 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:57:48 compute-0 nova_compute[189491]: 2025-12-01 09:57:48.727 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:57:49 compute-0 nova_compute[189491]: 2025-12-01 09:57:49.519 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:57:49 compute-0 nova_compute[189491]: 2025-12-01 09:57:49.520 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:57:49 compute-0 nova_compute[189491]: 2025-12-01 09:57:49.520 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:57:50 compute-0 nova_compute[189491]: 2025-12-01 09:57:50.458 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:50 compute-0 nova_compute[189491]: 2025-12-01 09:57:50.885 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updating instance_info_cache with network_info: [{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:57:50 compute-0 nova_compute[189491]: 2025-12-01 09:57:50.940 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:57:50 compute-0 nova_compute[189491]: 2025-12-01 09:57:50.941 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:57:53 compute-0 nova_compute[189491]: 2025-12-01 09:57:53.544 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:55 compute-0 nova_compute[189491]: 2025-12-01 09:57:55.461 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:55 compute-0 podman[259581]: 2025-12-01 09:57:55.70560115 +0000 UTC m=+0.082829421 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:57:55 compute-0 podman[259583]: 2025-12-01 09:57:55.721285242 +0000 UTC m=+0.088047128 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, name=ubi9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., container_name=kepler, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  1 09:57:55 compute-0 podman[259582]: 2025-12-01 09:57:55.72981403 +0000 UTC m=+0.102426959 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 09:57:58 compute-0 nova_compute[189491]: 2025-12-01 09:57:58.546 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:57:58 compute-0 nova_compute[189491]: 2025-12-01 09:57:58.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:57:59 compute-0 podman[203700]: time="2025-12-01T09:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:57:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:57:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Dec  1 09:58:00 compute-0 nova_compute[189491]: 2025-12-01 09:58:00.466 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:00 compute-0 nova_compute[189491]: 2025-12-01 09:58:00.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:01 compute-0 openstack_network_exporter[205866]: ERROR   09:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:58:01 compute-0 openstack_network_exporter[205866]: ERROR   09:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:58:01 compute-0 openstack_network_exporter[205866]: ERROR   09:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:58:01 compute-0 openstack_network_exporter[205866]: ERROR   09:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:58:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:58:01 compute-0 openstack_network_exporter[205866]: ERROR   09:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:58:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:58:01 compute-0 nova_compute[189491]: 2025-12-01 09:58:01.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:01 compute-0 nova_compute[189491]: 2025-12-01 09:58:01.742 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:58:01 compute-0 nova_compute[189491]: 2025-12-01 09:58:01.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:58:01 compute-0 nova_compute[189491]: 2025-12-01 09:58:01.743 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:58:01 compute-0 nova_compute[189491]: 2025-12-01 09:58:01.743 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:58:01 compute-0 nova_compute[189491]: 2025-12-01 09:58:01.819 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:58:01 compute-0 nova_compute[189491]: 2025-12-01 09:58:01.918 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:58:01 compute-0 nova_compute[189491]: 2025-12-01 09:58:01.920 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.018 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.027 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.106 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.107 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.165 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.531 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.532 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4863MB free_disk=72.24858093261719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.533 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.533 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.616 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.617 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.617 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.617 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.636 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing inventories for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.656 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating ProviderTree inventory for provider 143c7fe7-af1f-477a-978c-6a994d785d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.656 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Updating inventory in ProviderTree for provider 143c7fe7-af1f-477a-978c-6a994d785d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.670 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing aggregate associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.694 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Refreshing trait associations for resource provider 143c7fe7-af1f-477a-978c-6a994d785d98, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 09:58:02 compute-0 podman[259654]: 2025-12-01 09:58:02.698519804 +0000 UTC m=+0.064624586 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:58:02 compute-0 podman[259653]: 2025-12-01 09:58:02.732606836 +0000 UTC m=+0.103746541 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, architecture=x86_64, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.751 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.766 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.768 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.768 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.769 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.769 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 09:58:02 compute-0 nova_compute[189491]: 2025-12-01 09:58:02.785 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 09:58:03 compute-0 nova_compute[189491]: 2025-12-01 09:58:03.549 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:04 compute-0 nova_compute[189491]: 2025-12-01 09:58:04.787 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:04 compute-0 nova_compute[189491]: 2025-12-01 09:58:04.788 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:04 compute-0 nova_compute[189491]: 2025-12-01 09:58:04.788 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:04 compute-0 nova_compute[189491]: 2025-12-01 09:58:04.789 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:58:05 compute-0 nova_compute[189491]: 2025-12-01 09:58:05.470 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:06 compute-0 podman[259688]: 2025-12-01 09:58:06.71823738 +0000 UTC m=+0.073330929 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec  1 09:58:06 compute-0 podman[259689]: 2025-12-01 09:58:06.765646307 +0000 UTC m=+0.127573473 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:58:08 compute-0 nova_compute[189491]: 2025-12-01 09:58:08.552 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:08 compute-0 nova_compute[189491]: 2025-12-01 09:58:08.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:08 compute-0 nova_compute[189491]: 2025-12-01 09:58:08.716 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:09 compute-0 nova_compute[189491]: 2025-12-01 09:58:09.727 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:10 compute-0 nova_compute[189491]: 2025-12-01 09:58:10.475 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:13 compute-0 nova_compute[189491]: 2025-12-01 09:58:13.555 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:14 compute-0 nova_compute[189491]: 2025-12-01 09:58:14.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:15 compute-0 nova_compute[189491]: 2025-12-01 09:58:15.480 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:16 compute-0 podman[259734]: 2025-12-01 09:58:16.693312228 +0000 UTC m=+0.064521574 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:58:16 compute-0 podman[259733]: 2025-12-01 09:58:16.713307146 +0000 UTC m=+0.087652719 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:58:18 compute-0 nova_compute[189491]: 2025-12-01 09:58:18.556 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.798 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.799 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.810 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.810 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.810 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84c583dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.806 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2', 'name': 'te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.814 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'name': 'te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2', 'flavor': {'id': '422f041c-a187-4aa2-8167-37f3eb0e89c2', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '280f4e4d-4a12-4164-a687-6106a9afc7fe'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'user_id': 'c54f3a4a232b4a739be88e97f2094d4f', 'hostId': 'b9c6fdac1e98b24aca6852a4c44644f8d936ac2e3843f1f4b4c15406', 'status': 'active', 'metadata': {'metering.server_group': 'e03937ad-4d2d-4edc-9b33-ed8d878566ca'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.814 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.815 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T09:58:19.815185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.852 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.853 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.894 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 31078912 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.895 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.895 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.896 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.896 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.896 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.896 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T09:58:19.896399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.911 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.911 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.924 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.924 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.925 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.925 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.925 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.925 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.925 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.925 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.926 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 555739014 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.926 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.latency volume: 127763752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.926 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 558901098 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.926 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.latency volume: 60948895 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.927 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.927 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.927 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.927 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.927 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.928 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.928 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.928 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T09:58:19.925870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T09:58:19.927696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T09:58:19.929619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.930 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.930 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.931 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.931 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.931 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T09:58:19.931338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.949 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.968 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.969 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.969 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.969 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.969 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.970 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.970 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 2477497831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.970 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.970 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 3075326058 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.971 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.971 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.972 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.972 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.972 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.972 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.973 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.973 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T09:58:19.970070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T09:58:19.972525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.974 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.974 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T09:58:19.974397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.978 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.982 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.983 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.983 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.983 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T09:58:19.983440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T09:58:19.984441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.984 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.985 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.985 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T09:58:19.985601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.986 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.987 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T09:58:19.986592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.987 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.988 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.988 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T09:58:19.987816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.988 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.988 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.989 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.989 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.989 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T09:58:19.989010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.990 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.990 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.990 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.990 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T09:58:19.990367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.991 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/memory.usage volume: 42.3125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.992 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/memory.usage volume: 42.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.992 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.992 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.993 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.993 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.993 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.993 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T09:58:19.991820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T09:58:19.993024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T09:58:19.994102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.994 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.995 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.995 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/cpu volume: 335740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T09:58:19.995184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.995 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/cpu volume: 339630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.996 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.996 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T09:58:19.996363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.996 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.997 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.997 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.997 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.998 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.998 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T09:58:19.997959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.998 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.999 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.999 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.999 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T09:58:19.999223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:19.999 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.000 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.001 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.001 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.001 14 DEBUG ceilometer.compute.pollsters [-] be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.001 14 DEBUG ceilometer.compute.pollsters [-] dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T09:58:20.001268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 09:58:20.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 09:58:20 compute-0 nova_compute[189491]: 2025-12-01 09:58:20.484 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:23 compute-0 nova_compute[189491]: 2025-12-01 09:58:23.559 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:25 compute-0 nova_compute[189491]: 2025-12-01 09:58:25.487 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:58:26.552 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:58:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:58:26.552 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:58:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:58:26.553 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:58:26 compute-0 podman[259780]: 2025-12-01 09:58:26.715877107 +0000 UTC m=+0.070437038 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release-0.7.12=, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler)
Dec  1 09:58:26 compute-0 podman[259778]: 2025-12-01 09:58:26.73480865 +0000 UTC m=+0.099412916 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:58:26 compute-0 podman[259779]: 2025-12-01 09:58:26.74385493 +0000 UTC m=+0.104246623 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 09:58:28 compute-0 nova_compute[189491]: 2025-12-01 09:58:28.562 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:29 compute-0 podman[203700]: time="2025-12-01T09:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:58:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:58:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec  1 09:58:30 compute-0 nova_compute[189491]: 2025-12-01 09:58:30.491 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:31 compute-0 openstack_network_exporter[205866]: ERROR   09:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:58:31 compute-0 openstack_network_exporter[205866]: ERROR   09:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:58:31 compute-0 openstack_network_exporter[205866]: ERROR   09:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:58:31 compute-0 openstack_network_exporter[205866]: ERROR   09:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:58:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:58:31 compute-0 openstack_network_exporter[205866]: ERROR   09:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:58:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:58:33 compute-0 nova_compute[189491]: 2025-12-01 09:58:33.563 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:33 compute-0 podman[259840]: 2025-12-01 09:58:33.694635258 +0000 UTC m=+0.070005408 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=)
Dec  1 09:58:33 compute-0 podman[259841]: 2025-12-01 09:58:33.722381015 +0000 UTC m=+0.098640396 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 09:58:35 compute-0 nova_compute[189491]: 2025-12-01 09:58:35.496 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:37 compute-0 podman[259876]: 2025-12-01 09:58:37.728921258 +0000 UTC m=+0.094195599 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 09:58:37 compute-0 podman[259877]: 2025-12-01 09:58:37.753005125 +0000 UTC m=+0.107372850 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:58:38 compute-0 nova_compute[189491]: 2025-12-01 09:58:38.567 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:40 compute-0 nova_compute[189491]: 2025-12-01 09:58:40.501 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:43 compute-0 nova_compute[189491]: 2025-12-01 09:58:43.569 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:45 compute-0 nova_compute[189491]: 2025-12-01 09:58:45.504 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:47 compute-0 podman[259922]: 2025-12-01 09:58:47.698831068 +0000 UTC m=+0.075403160 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 09:58:47 compute-0 podman[259923]: 2025-12-01 09:58:47.71861783 +0000 UTC m=+0.094352482 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec  1 09:58:48 compute-0 nova_compute[189491]: 2025-12-01 09:58:48.570 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:48 compute-0 nova_compute[189491]: 2025-12-01 09:58:48.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:48 compute-0 nova_compute[189491]: 2025-12-01 09:58:48.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:58:48 compute-0 nova_compute[189491]: 2025-12-01 09:58:48.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 09:58:49 compute-0 nova_compute[189491]: 2025-12-01 09:58:49.535 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:58:49 compute-0 nova_compute[189491]: 2025-12-01 09:58:49.536 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:58:49 compute-0 nova_compute[189491]: 2025-12-01 09:58:49.536 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:58:49 compute-0 nova_compute[189491]: 2025-12-01 09:58:49.537 189495 DEBUG nova.objects.instance [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:58:50 compute-0 nova_compute[189491]: 2025-12-01 09:58:50.510 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:51 compute-0 nova_compute[189491]: 2025-12-01 09:58:51.299 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [{"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:58:51 compute-0 nova_compute[189491]: 2025-12-01 09:58:51.385 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:58:51 compute-0 nova_compute[189491]: 2025-12-01 09:58:51.386 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:58:53 compute-0 nova_compute[189491]: 2025-12-01 09:58:53.572 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:55 compute-0 nova_compute[189491]: 2025-12-01 09:58:55.514 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:57 compute-0 podman[259968]: 2025-12-01 09:58:57.697654901 +0000 UTC m=+0.069449116 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, managed_by=edpm_ansible)
Dec  1 09:58:57 compute-0 podman[259967]: 2025-12-01 09:58:57.70010505 +0000 UTC m=+0.074767345 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 09:58:57 compute-0 podman[259966]: 2025-12-01 09:58:57.702702213 +0000 UTC m=+0.076872986 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:58:58 compute-0 nova_compute[189491]: 2025-12-01 09:58:58.574 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:58:58 compute-0 nova_compute[189491]: 2025-12-01 09:58:58.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:58:59 compute-0 podman[203700]: time="2025-12-01T09:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:58:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:58:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.112 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.133 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.133 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Triggering sync for uuid be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.134 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.134 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.135 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.135 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.164 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.166 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:00 compute-0 nova_compute[189491]: 2025-12-01 09:59:00.518 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:01 compute-0 openstack_network_exporter[205866]: ERROR   09:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:59:01 compute-0 openstack_network_exporter[205866]: ERROR   09:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:59:01 compute-0 openstack_network_exporter[205866]: ERROR   09:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:59:01 compute-0 openstack_network_exporter[205866]: ERROR   09:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:59:01 compute-0 openstack_network_exporter[205866]: ERROR   09:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:59:01 compute-0 nova_compute[189491]: 2025-12-01 09:59:01.732 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:02 compute-0 nova_compute[189491]: 2025-12-01 09:59:02.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:02 compute-0 nova_compute[189491]: 2025-12-01 09:59:02.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:02 compute-0 nova_compute[189491]: 2025-12-01 09:59:02.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:02 compute-0 nova_compute[189491]: 2025-12-01 09:59:02.746 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:02 compute-0 nova_compute[189491]: 2025-12-01 09:59:02.747 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 09:59:02 compute-0 nova_compute[189491]: 2025-12-01 09:59:02.871 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:59:02 compute-0 nova_compute[189491]: 2025-12-01 09:59:02.949 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:59:02 compute-0 nova_compute[189491]: 2025-12-01 09:59:02.951 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.023 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.034 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.103 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.104 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.159 189495 DEBUG oslo_concurrency.processutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.511 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.513 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4862MB free_disk=72.24858093261719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.513 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.513 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.576 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.699 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.699 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.700 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.700 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.853 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.867 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.869 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 09:59:03 compute-0 nova_compute[189491]: 2025-12-01 09:59:03.869 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:04 compute-0 podman[260042]: 2025-12-01 09:59:04.686168545 +0000 UTC m=+0.065665832 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, vcs-type=git)
Dec  1 09:59:04 compute-0 podman[260043]: 2025-12-01 09:59:04.700268609 +0000 UTC m=+0.073472592 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 09:59:04 compute-0 nova_compute[189491]: 2025-12-01 09:59:04.870 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:04 compute-0 nova_compute[189491]: 2025-12-01 09:59:04.870 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:04 compute-0 nova_compute[189491]: 2025-12-01 09:59:04.870 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:04 compute-0 nova_compute[189491]: 2025-12-01 09:59:04.871 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 09:59:05 compute-0 nova_compute[189491]: 2025-12-01 09:59:05.522 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:08 compute-0 nova_compute[189491]: 2025-12-01 09:59:08.579 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:08 compute-0 podman[260082]: 2025-12-01 09:59:08.686094342 +0000 UTC m=+0.066499853 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:59:08 compute-0 nova_compute[189491]: 2025-12-01 09:59:08.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:08 compute-0 podman[260083]: 2025-12-01 09:59:08.75410535 +0000 UTC m=+0.130323269 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 09:59:10 compute-0 nova_compute[189491]: 2025-12-01 09:59:10.525 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:11 compute-0 nova_compute[189491]: 2025-12-01 09:59:11.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:13 compute-0 nova_compute[189491]: 2025-12-01 09:59:13.581 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:15 compute-0 nova_compute[189491]: 2025-12-01 09:59:15.530 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:18 compute-0 nova_compute[189491]: 2025-12-01 09:59:18.583 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:18 compute-0 podman[260126]: 2025-12-01 09:59:18.6949011 +0000 UTC m=+0.057803851 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 09:59:18 compute-0 podman[260127]: 2025-12-01 09:59:18.739657591 +0000 UTC m=+0.099444416 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 09:59:20 compute-0 nova_compute[189491]: 2025-12-01 09:59:20.535 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:23 compute-0 nova_compute[189491]: 2025-12-01 09:59:23.585 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:25 compute-0 nova_compute[189491]: 2025-12-01 09:59:25.540 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:26.555 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:26.556 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:26.556 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:28 compute-0 nova_compute[189491]: 2025-12-01 09:59:28.587 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:28 compute-0 podman[260169]: 2025-12-01 09:59:28.699174707 +0000 UTC m=+0.071226998 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 09:59:28 compute-0 podman[260171]: 2025-12-01 09:59:28.719722589 +0000 UTC m=+0.082175646 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, distribution-scope=public)
Dec  1 09:59:28 compute-0 podman[260170]: 2025-12-01 09:59:28.73823043 +0000 UTC m=+0.105382301 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 09:59:29 compute-0 podman[203700]: time="2025-12-01T09:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:59:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:59:29 compute-0 podman[203700]: @ - - [01/Dec/2025:09:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 09:59:30 compute-0 nova_compute[189491]: 2025-12-01 09:59:30.543 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:31 compute-0 openstack_network_exporter[205866]: ERROR   09:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:59:31 compute-0 openstack_network_exporter[205866]: ERROR   09:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 09:59:31 compute-0 openstack_network_exporter[205866]: ERROR   09:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 09:59:31 compute-0 openstack_network_exporter[205866]: ERROR   09:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 09:59:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:59:31 compute-0 openstack_network_exporter[205866]: ERROR   09:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 09:59:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 09:59:33 compute-0 nova_compute[189491]: 2025-12-01 09:59:33.589 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:35 compute-0 nova_compute[189491]: 2025-12-01 09:59:35.548 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:35 compute-0 podman[260232]: 2025-12-01 09:59:35.720767613 +0000 UTC m=+0.081465018 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 09:59:35 compute-0 podman[260231]: 2025-12-01 09:59:35.731477534 +0000 UTC m=+0.094291330 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Dec  1 09:59:38 compute-0 nova_compute[189491]: 2025-12-01 09:59:38.591 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:39 compute-0 podman[260269]: 2025-12-01 09:59:39.696734598 +0000 UTC m=+0.075977444 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 09:59:39 compute-0 podman[260270]: 2025-12-01 09:59:39.749447354 +0000 UTC m=+0.122992151 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:59:40 compute-0 nova_compute[189491]: 2025-12-01 09:59:40.551 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:43 compute-0 nova_compute[189491]: 2025-12-01 09:59:43.593 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:45 compute-0 nova_compute[189491]: 2025-12-01 09:59:45.555 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:48 compute-0 nova_compute[189491]: 2025-12-01 09:59:48.596 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:49 compute-0 podman[260314]: 2025-12-01 09:59:49.707468446 +0000 UTC m=+0.079622773 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 09:59:49 compute-0 podman[260313]: 2025-12-01 09:59:49.718659839 +0000 UTC m=+0.091872972 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 09:59:50 compute-0 nova_compute[189491]: 2025-12-01 09:59:50.558 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:50 compute-0 nova_compute[189491]: 2025-12-01 09:59:50.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 09:59:50 compute-0 nova_compute[189491]: 2025-12-01 09:59:50.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 09:59:51 compute-0 nova_compute[189491]: 2025-12-01 09:59:51.248 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 09:59:51 compute-0 nova_compute[189491]: 2025-12-01 09:59:51.249 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquired lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 09:59:51 compute-0 nova_compute[189491]: 2025-12-01 09:59:51.249 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.455 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.456 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.456 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.457 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.457 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.458 189495 INFO nova.compute.manager [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Terminating instance#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.459 189495 DEBUG nova.compute.manager [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 09:59:53 compute-0 kernel: tape1536dee-e9 (unregistering): left promiscuous mode
Dec  1 09:59:53 compute-0 NetworkManager[56318]: <info>  [1764583193.4987] device (tape1536dee-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 09:59:53 compute-0 ovn_controller[97794]: 2025-12-01T09:59:53Z|00180|binding|INFO|Releasing lport e1536dee-e9fa-499f-9e7a-2b2a0ecce586 from this chassis (sb_readonly=0)
Dec  1 09:59:53 compute-0 ovn_controller[97794]: 2025-12-01T09:59:53Z|00181|binding|INFO|Setting lport e1536dee-e9fa-499f-9e7a-2b2a0ecce586 down in Southbound
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.511 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 ovn_controller[97794]: 2025-12-01T09:59:53Z|00182|binding|INFO|Removing iface tape1536dee-e9 ovn-installed in OVS
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.513 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.525 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.527 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:a8:e2 10.100.0.156'], port_security=['fa:16:3e:50:a8:e2 10.100.0.156'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.156/16', 'neutron:device_id': 'dc0d510c-4baf-4bcb-ab4f-de6ee48849c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43f98091-3f01-4ffd-9cb2-02d78ab9f60c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c2dbc4a-f4e0-49c5-bb92-4872f344781e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=e1536dee-e9fa-499f-9e7a-2b2a0ecce586) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.528 106659 INFO neutron.agent.ovn.metadata.agent [-] Port e1536dee-e9fa-499f-9e7a-2b2a0ecce586 in datapath cf0577af-a5ed-496f-aa24-ae4d86898e85 unbound from our chassis#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.529 106659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0577af-a5ed-496f-aa24-ae4d86898e85#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.545 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[179a28b0-c4e2-4795-89f8-db51b2c15462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:59:53 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  1 09:59:53 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 7min 12.124s CPU time.
Dec  1 09:59:53 compute-0 systemd-machined[155812]: Machine qemu-12-instance-0000000b terminated.
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.576 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[c42025fd-8b36-4d1b-ae33-45611a9abdb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.579 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[ec095c27-fda7-4bad-9c82-2dccf2e3fcdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.598 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.605 239843 DEBUG oslo.privsep.daemon [-] privsep: reply[e11e5be6-bb36-4727-9401-98ce06583cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.621 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b81da777-fa22-4b8f-9936-15d61f4875d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0577af-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:ac:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554552, 'reachable_time': 27139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260370, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.634 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0e03e5-0840-4b8f-8e81-fed851601bc7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0577af-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554566, 'tstamp': 554566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260371, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapcf0577af-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554571, 'tstamp': 554571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260371, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.636 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0577af-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.636 189495 DEBUG nova.network.neutron [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updating instance_info_cache with network_info: [{"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.639 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.643 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.643 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0577af-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.644 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.644 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0577af-a0, col_values=(('external_ids', {'iface-id': '7159c06b-520e-4157-9235-0b4ddbac66cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.644 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.659 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Releasing lock "refresh_cache-be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.660 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.739 189495 INFO nova.virt.libvirt.driver [-] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Instance destroyed successfully.#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.739 189495 DEBUG nova.objects.instance [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lazy-loading 'resources' on Instance uuid dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.756 189495 DEBUG nova.virt.libvirt.vif [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:44:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8664732-asg-zzzrimsgcaeu-gnecnnuukmep-lujrpewlzjs2',id=11,image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:45:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e03937ad-4d2d-4edc-9b33-ed8d878566ca'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d5294cc5ac64b22a4a0f770b8d8bc61',ramdisk_id='',reservation_id='r-flgn0x2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1348038279',owner_user_name='tempest-PrometheusGabbiTest-1348038279-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:45:07Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c54f3a4a232b4a739be88e97f2094d4f',uuid=dc0d510c-4baf-4bcb-ab4f-de6ee48849c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.756 189495 DEBUG nova.network.os_vif_util [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converting VIF {"id": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "address": "fa:16:3e:50:a8:e2", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.156", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1536dee-e9", "ovs_interfaceid": "e1536dee-e9fa-499f-9e7a-2b2a0ecce586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.757 189495 DEBUG nova.network.os_vif_util [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:a8:e2,bridge_name='br-int',has_traffic_filtering=True,id=e1536dee-e9fa-499f-9e7a-2b2a0ecce586,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1536dee-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.758 189495 DEBUG os_vif [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:a8:e2,bridge_name='br-int',has_traffic_filtering=True,id=e1536dee-e9fa-499f-9e7a-2b2a0ecce586,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1536dee-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.759 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.760 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1536dee-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.761 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.764 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.765 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.768 189495 DEBUG nova.compute.manager [req-54356ef2-ec29-469d-8ebd-78e4988ae079 req-e5859618-36e5-4324-8bb3-734b5eb6364d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received event network-vif-unplugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.769 189495 DEBUG oslo_concurrency.lockutils [req-54356ef2-ec29-469d-8ebd-78e4988ae079 req-e5859618-36e5-4324-8bb3-734b5eb6364d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.769 189495 DEBUG oslo_concurrency.lockutils [req-54356ef2-ec29-469d-8ebd-78e4988ae079 req-e5859618-36e5-4324-8bb3-734b5eb6364d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.769 189495 DEBUG oslo_concurrency.lockutils [req-54356ef2-ec29-469d-8ebd-78e4988ae079 req-e5859618-36e5-4324-8bb3-734b5eb6364d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.769 189495 DEBUG nova.compute.manager [req-54356ef2-ec29-469d-8ebd-78e4988ae079 req-e5859618-36e5-4324-8bb3-734b5eb6364d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] No waiting events found dispatching network-vif-unplugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.769 189495 DEBUG nova.compute.manager [req-54356ef2-ec29-469d-8ebd-78e4988ae079 req-e5859618-36e5-4324-8bb3-734b5eb6364d ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received event network-vif-unplugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.770 189495 INFO os_vif [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:a8:e2,bridge_name='br-int',has_traffic_filtering=True,id=e1536dee-e9fa-499f-9e7a-2b2a0ecce586,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1536dee-e9')#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.770 189495 INFO nova.virt.libvirt.driver [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Deleting instance files /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0_del#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.771 189495 INFO nova.virt.libvirt.driver [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Deletion of /var/lib/nova/instances/dc0d510c-4baf-4bcb-ab4f-de6ee48849c0_del complete#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.852 189495 INFO nova.compute.manager [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.854 189495 DEBUG oslo.service.loopingcall [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.854 189495 DEBUG nova.compute.manager [-] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.855 189495 DEBUG nova.network.neutron [-] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.893 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:2b:76', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f6:fe:a3:90:0a:20'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 09:59:53 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:53.893 106659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 09:59:53 compute-0 nova_compute[189491]: 2025-12-01 09:59:53.897 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.140 189495 DEBUG nova.network.neutron [-] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.160 189495 INFO nova.compute.manager [-] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Took 1.30 seconds to deallocate network for instance.#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.204 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.205 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.213 189495 DEBUG nova.compute.manager [req-3cd42580-821f-4677-a82d-7f8c75eb112b req-e09733bd-d195-45a6-9091-1d7be4b940f4 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received event network-vif-deleted-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.273 189495 DEBUG nova.compute.provider_tree [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.290 189495 DEBUG nova.scheduler.client.report [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.311 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.350 189495 INFO nova.scheduler.client.report [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Deleted allocations for instance dc0d510c-4baf-4bcb-ab4f-de6ee48849c0#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.409 189495 DEBUG oslo_concurrency.lockutils [None req-5f4ecdaf-0d59-413f-a409-1a47909aa297 c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.840 189495 DEBUG nova.compute.manager [req-9930c78e-c1d4-4ccb-a807-ba5ead116e5b req-d32e2f0e-baf6-4fd1-a852-192c458e6922 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received event network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.841 189495 DEBUG oslo_concurrency.lockutils [req-9930c78e-c1d4-4ccb-a807-ba5ead116e5b req-d32e2f0e-baf6-4fd1-a852-192c458e6922 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.841 189495 DEBUG oslo_concurrency.lockutils [req-9930c78e-c1d4-4ccb-a807-ba5ead116e5b req-d32e2f0e-baf6-4fd1-a852-192c458e6922 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.842 189495 DEBUG oslo_concurrency.lockutils [req-9930c78e-c1d4-4ccb-a807-ba5ead116e5b req-d32e2f0e-baf6-4fd1-a852-192c458e6922 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "dc0d510c-4baf-4bcb-ab4f-de6ee48849c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.842 189495 DEBUG nova.compute.manager [req-9930c78e-c1d4-4ccb-a807-ba5ead116e5b req-d32e2f0e-baf6-4fd1-a852-192c458e6922 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] No waiting events found dispatching network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 09:59:55 compute-0 nova_compute[189491]: 2025-12-01 09:59:55.842 189495 WARNING nova.compute.manager [req-9930c78e-c1d4-4ccb-a807-ba5ead116e5b req-d32e2f0e-baf6-4fd1-a852-192c458e6922 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Received unexpected event network-vif-plugged-e1536dee-e9fa-499f-9e7a-2b2a0ecce586 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 09:59:56 compute-0 ovn_metadata_agent[106654]: 2025-12-01 09:59:56.895 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=203a4433-d8f4-4d80-8084-548a6d57cd5d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 09:59:58 compute-0 nova_compute[189491]: 2025-12-01 09:59:58.601 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:58 compute-0 nova_compute[189491]: 2025-12-01 09:59:58.763 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 09:59:59 compute-0 podman[260390]: 2025-12-01 09:59:59.693120672 +0000 UTC m=+0.066908363 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 09:59:59 compute-0 podman[260391]: 2025-12-01 09:59:59.700596074 +0000 UTC m=+0.074491008 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 09:59:59 compute-0 podman[260392]: 2025-12-01 09:59:59.728641688 +0000 UTC m=+0.097306504 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, release-0.7.12=, architecture=x86_64, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Dec  1 09:59:59 compute-0 podman[203700]: time="2025-12-01T09:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 09:59:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec  1 09:59:59 compute-0 podman[203700]: @ - - [01/Dec/2025:09:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec  1 10:00:00 compute-0 nova_compute[189491]: 2025-12-01 10:00:00.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:01 compute-0 openstack_network_exporter[205866]: ERROR   10:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 10:00:01 compute-0 openstack_network_exporter[205866]: ERROR   10:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 10:00:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 10:00:01 compute-0 openstack_network_exporter[205866]: ERROR   10:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 10:00:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 10:00:01 compute-0 openstack_network_exporter[205866]: ERROR   10:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:00:01 compute-0 openstack_network_exporter[205866]: ERROR   10:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.947 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.948 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.948 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.949 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.949 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.950 189495 INFO nova.compute.manager [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Terminating instance#033[00m
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.951 189495 DEBUG nova.compute.manager [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 10:00:01 compute-0 kernel: tap01cbdc1d-a8 (unregistering): left promiscuous mode
Dec  1 10:00:01 compute-0 NetworkManager[56318]: <info>  [1764583201.9875] device (tap01cbdc1d-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.991 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:01 compute-0 ovn_controller[97794]: 2025-12-01T10:00:01Z|00183|binding|INFO|Releasing lport 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 from this chassis (sb_readonly=0)
Dec  1 10:00:01 compute-0 ovn_controller[97794]: 2025-12-01T10:00:01Z|00184|binding|INFO|Setting lport 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 down in Southbound
Dec  1 10:00:01 compute-0 ovn_controller[97794]: 2025-12-01T10:00:01Z|00185|binding|INFO|Removing iface tap01cbdc1d-a8 ovn-installed in OVS
Dec  1 10:00:01 compute-0 nova_compute[189491]: 2025-12-01 10:00:01.996 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.002 106659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:35:95 10.100.3.35'], port_security=['fa:16:3e:37:35:95 10.100.3.35'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.35/16', 'neutron:device_id': 'be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d5294cc5ac64b22a4a0f770b8d8bc61', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43f98091-3f01-4ffd-9cb2-02d78ab9f60c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c2dbc4a-f4e0-49c5-bb92-4872f344781e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>], logical_port=01cbdc1d-a86f-411f-a8e1-8a4166f063d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff7f12c3670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.003 106659 INFO neutron.agent.ovn.metadata.agent [-] Port 01cbdc1d-a86f-411f-a8e1-8a4166f063d3 in datapath cf0577af-a5ed-496f-aa24-ae4d86898e85 unbound from our chassis#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.004 106659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf0577af-a5ed-496f-aa24-ae4d86898e85, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.005 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b601eb-fd29-428b-8bd5-9214f7c7936f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.006 106659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85 namespace which is not needed anymore#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.015 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:02 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec  1 10:00:02 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 34.773s CPU time.
Dec  1 10:00:02 compute-0 systemd-machined[155812]: Machine qemu-16-instance-0000000f terminated.
Dec  1 10:00:02 compute-0 neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85[254007]: [NOTICE]   (254011) : haproxy version is 2.8.14-c23fe91
Dec  1 10:00:02 compute-0 neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85[254007]: [NOTICE]   (254011) : path to executable is /usr/sbin/haproxy
Dec  1 10:00:02 compute-0 neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85[254007]: [WARNING]  (254011) : Exiting Master process...
Dec  1 10:00:02 compute-0 neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85[254007]: [ALERT]    (254011) : Current worker (254013) exited with code 143 (Terminated)
Dec  1 10:00:02 compute-0 neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85[254007]: [WARNING]  (254011) : All workers exited. Exiting... (0)
Dec  1 10:00:02 compute-0 systemd[1]: libpod-11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1.scope: Deactivated successfully.
Dec  1 10:00:02 compute-0 podman[260473]: 2025-12-01 10:00:02.15533192 +0000 UTC m=+0.053923576 container died 11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.176 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.186 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1-userdata-shm.mount: Deactivated successfully.
Dec  1 10:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a3150d317095ad740158ae4fa495bd79bb1451eaef937eaa43dfcd85db07375-merged.mount: Deactivated successfully.
Dec  1 10:00:02 compute-0 podman[260473]: 2025-12-01 10:00:02.220238623 +0000 UTC m=+0.118830259 container cleanup 11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.226 189495 INFO nova.virt.libvirt.driver [-] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Instance destroyed successfully.#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.227 189495 DEBUG nova.objects.instance [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lazy-loading 'resources' on Instance uuid be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 10:00:02 compute-0 systemd[1]: libpod-conmon-11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1.scope: Deactivated successfully.
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.248 189495 DEBUG nova.virt.libvirt.vif [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T09:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8664732-asg-zzzrimsgcaeu-wsvolr2mhgm2-s6bg7htmycz5',id=15,image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T09:50:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e03937ad-4d2d-4edc-9b33-ed8d878566ca'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d5294cc5ac64b22a4a0f770b8d8bc61',ramdisk_id='',reservation_id='r-nfp6qkos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='280f4e4d-4a12-4164-a687-6106a9afc7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1348038279',owner_user_name='tempest-PrometheusGabbiTest-1348038279-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T09:50:07Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c54f3a4a232b4a739be88e97f2094d4f',uuid=be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.249 189495 DEBUG nova.network.os_vif_util [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converting VIF {"id": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "address": "fa:16:3e:37:35:95", "network": {"id": "cf0577af-a5ed-496f-aa24-ae4d86898e85", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d5294cc5ac64b22a4a0f770b8d8bc61", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cbdc1d-a8", "ovs_interfaceid": "01cbdc1d-a86f-411f-a8e1-8a4166f063d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.249 189495 DEBUG nova.network.os_vif_util [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:35:95,bridge_name='br-int',has_traffic_filtering=True,id=01cbdc1d-a86f-411f-a8e1-8a4166f063d3,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cbdc1d-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.250 189495 DEBUG os_vif [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:35:95,bridge_name='br-int',has_traffic_filtering=True,id=01cbdc1d-a86f-411f-a8e1-8a4166f063d3,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cbdc1d-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.251 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.252 189495 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01cbdc1d-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.255 189495 DEBUG nova.compute.manager [req-5c206ec6-dff8-4b08-afed-71f0f7f28f98 req-c0c70b23-c718-4998-879f-47769dec45a6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received event network-vif-unplugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.256 189495 DEBUG oslo_concurrency.lockutils [req-5c206ec6-dff8-4b08-afed-71f0f7f28f98 req-c0c70b23-c718-4998-879f-47769dec45a6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.256 189495 DEBUG oslo_concurrency.lockutils [req-5c206ec6-dff8-4b08-afed-71f0f7f28f98 req-c0c70b23-c718-4998-879f-47769dec45a6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.257 189495 DEBUG oslo_concurrency.lockutils [req-5c206ec6-dff8-4b08-afed-71f0f7f28f98 req-c0c70b23-c718-4998-879f-47769dec45a6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.257 189495 DEBUG nova.compute.manager [req-5c206ec6-dff8-4b08-afed-71f0f7f28f98 req-c0c70b23-c718-4998-879f-47769dec45a6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] No waiting events found dispatching network-vif-unplugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.258 189495 DEBUG nova.compute.manager [req-5c206ec6-dff8-4b08-afed-71f0f7f28f98 req-c0c70b23-c718-4998-879f-47769dec45a6 ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received event network-vif-unplugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.258 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.260 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.262 189495 INFO os_vif [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:35:95,bridge_name='br-int',has_traffic_filtering=True,id=01cbdc1d-a86f-411f-a8e1-8a4166f063d3,network=Network(cf0577af-a5ed-496f-aa24-ae4d86898e85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cbdc1d-a8')#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.263 189495 INFO nova.virt.libvirt.driver [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Deleting instance files /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2_del#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.264 189495 INFO nova.virt.libvirt.driver [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Deletion of /var/lib/nova/instances/be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2_del complete#033[00m
Dec  1 10:00:02 compute-0 podman[260518]: 2025-12-01 10:00:02.30133182 +0000 UTC m=+0.053261750 container remove 11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.309 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[336867cf-89c7-4ae7-aa7c-8470101f369f]: (4, ('Mon Dec  1 10:00:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85 (11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1)\n11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1\nMon Dec  1 10:00:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85 (11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1)\n11aba77243e759c2d6c3e70732cd39540275449415fce36de1fa54533f0f4be1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.311 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[94af5dab-3dba-4b0a-ba1b-5fa93045005b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.312 106659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0577af-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.314 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:02 compute-0 kernel: tapcf0577af-a0: left promiscuous mode
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.323 189495 INFO nova.compute.manager [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.324 189495 DEBUG oslo.service.loopingcall [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.324 189495 DEBUG nova.compute.manager [-] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.325 189495 DEBUG nova.network.neutron [-] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.330 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.333 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e66478fc-1b57-43c6-abd2-c8591a6d7c80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.346 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[7f97ab7e-937e-46c2-9511-2345633b0bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.348 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[674bacf2-d5b4-46d1-86a7-852a5a230c94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.364 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[09d5def0-3eed-44c1-ae30-b62014b197a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554542, 'reachable_time': 39674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260532, 'error': None, 'target': 'ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 10:00:02 compute-0 systemd[1]: run-netns-ovnmeta\x2dcf0577af\x2da5ed\x2d496f\x2daa24\x2dae4d86898e85.mount: Deactivated successfully.
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.368 106797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf0577af-a5ed-496f-aa24-ae4d86898e85 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 10:00:02 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:02.368 106797 DEBUG oslo.privsep.daemon [-] privsep: reply[f18d5d0c-ec6d-4bc7-9684-f884c310fe4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.751 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.752 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.753 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:00:02 compute-0 nova_compute[189491]: 2025-12-01 10:00:02.753 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.146 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.148 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5320MB free_disk=72.30591201782227GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.149 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.149 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.240 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.241 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.241 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.289 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.305 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.332 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.332 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.603 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.870 189495 DEBUG nova.network.neutron [-] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.888 189495 INFO nova.compute.manager [-] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Took 1.56 seconds to deallocate network for instance.#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.929 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.930 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.974 189495 DEBUG nova.compute.provider_tree [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 10:00:03 compute-0 nova_compute[189491]: 2025-12-01 10:00:03.991 189495 DEBUG nova.scheduler.client.report [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.012 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.037 189495 INFO nova.scheduler.client.report [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Deleted allocations for instance be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.106 189495 DEBUG oslo_concurrency.lockutils [None req-0e5a240e-9a49-40e7-b3e6-c8134b227f2a c54f3a4a232b4a739be88e97f2094d4f 6d5294cc5ac64b22a4a0f770b8d8bc61 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.333 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.343 189495 DEBUG nova.compute.manager [req-dd265545-e541-4583-bfce-91732221f948 req-fd72a66b-daca-4294-aa22-421c50f5310e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received event network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.344 189495 DEBUG oslo_concurrency.lockutils [req-dd265545-e541-4583-bfce-91732221f948 req-fd72a66b-daca-4294-aa22-421c50f5310e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Acquiring lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.344 189495 DEBUG oslo_concurrency.lockutils [req-dd265545-e541-4583-bfce-91732221f948 req-fd72a66b-daca-4294-aa22-421c50f5310e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.345 189495 DEBUG oslo_concurrency.lockutils [req-dd265545-e541-4583-bfce-91732221f948 req-fd72a66b-daca-4294-aa22-421c50f5310e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] Lock "be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.345 189495 DEBUG nova.compute.manager [req-dd265545-e541-4583-bfce-91732221f948 req-fd72a66b-daca-4294-aa22-421c50f5310e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] No waiting events found dispatching network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.345 189495 WARNING nova.compute.manager [req-dd265545-e541-4583-bfce-91732221f948 req-fd72a66b-daca-4294-aa22-421c50f5310e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received unexpected event network-vif-plugged-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.346 189495 DEBUG nova.compute.manager [req-dd265545-e541-4583-bfce-91732221f948 req-fd72a66b-daca-4294-aa22-421c50f5310e ca0ab5339610464dbf100db912d81e01 fa2c40fc5d2f4460ba58fc7f9fc41a05 - - default default] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Received event network-vif-deleted-01cbdc1d-a86f-411f-a8e1-8a4166f063d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 10:00:04 compute-0 nova_compute[189491]: 2025-12-01 10:00:04.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:05 compute-0 nova_compute[189491]: 2025-12-01 10:00:05.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:05 compute-0 nova_compute[189491]: 2025-12-01 10:00:05.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 10:00:06 compute-0 podman[260539]: 2025-12-01 10:00:06.686299283 +0000 UTC m=+0.063047969 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  1 10:00:06 compute-0 podman[260538]: 2025-12-01 10:00:06.721071441 +0000 UTC m=+0.098993005 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 10:00:07 compute-0 nova_compute[189491]: 2025-12-01 10:00:07.255 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:08 compute-0 nova_compute[189491]: 2025-12-01 10:00:08.606 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:08 compute-0 nova_compute[189491]: 2025-12-01 10:00:08.736 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764583193.7355077, dc0d510c-4baf-4bcb-ab4f-de6ee48849c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 10:00:08 compute-0 nova_compute[189491]: 2025-12-01 10:00:08.737 189495 INFO nova.compute.manager [-] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] VM Stopped (Lifecycle Event)#033[00m
Dec  1 10:00:08 compute-0 nova_compute[189491]: 2025-12-01 10:00:08.758 189495 DEBUG nova.compute.manager [None req-4d3779d9-92a1-4095-8088-7b2d34382148 - - - - - -] [instance: dc0d510c-4baf-4bcb-ab4f-de6ee48849c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 10:00:09 compute-0 nova_compute[189491]: 2025-12-01 10:00:09.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:10 compute-0 podman[260576]: 2025-12-01 10:00:10.72130041 +0000 UTC m=+0.100084562 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 10:00:10 compute-0 podman[260577]: 2025-12-01 10:00:10.730177656 +0000 UTC m=+0.105076044 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 10:00:11 compute-0 nova_compute[189491]: 2025-12-01 10:00:11.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:12 compute-0 nova_compute[189491]: 2025-12-01 10:00:12.260 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:13 compute-0 nova_compute[189491]: 2025-12-01 10:00:13.607 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:17 compute-0 nova_compute[189491]: 2025-12-01 10:00:17.223 189495 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764583202.222804, be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 10:00:17 compute-0 nova_compute[189491]: 2025-12-01 10:00:17.224 189495 INFO nova.compute.manager [-] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] VM Stopped (Lifecycle Event)#033[00m
Dec  1 10:00:17 compute-0 nova_compute[189491]: 2025-12-01 10:00:17.245 189495 DEBUG nova.compute.manager [None req-58013756-b099-4597-9150-f923f9645306 - - - - - -] [instance: be4cb8ff-ec1f-4d01-90b8-a93513c4a4a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 10:00:17 compute-0 nova_compute[189491]: 2025-12-01 10:00:17.262 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:17 compute-0 nova_compute[189491]: 2025-12-01 10:00:17.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:18 compute-0 nova_compute[189491]: 2025-12-01 10:00:18.608 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.799 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.799 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7ff84c98b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84dc55100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ca1c260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84ff01af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7ff8501e1d00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b5f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98b650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84f216690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c9896d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98a720>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7ff84c98bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7ff84fb376e0>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7ff84c98b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7ff84c98b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7ff84c98b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7ff84ca1c230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7ff84c98b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7ff84c98b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7ff84c98b620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7ff84c98b680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7ff84c98b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7ff84c98b920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7ff84c98b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7ff84c98bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7ff84c98bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7ff84c98bc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7ff84c98bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7ff84c98bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7ff84c98b5c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7ff84dc55040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7ff84c98be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7ff8503b1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7ff84dab3f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7ff84c98bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7ff84c98b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7ff84c98bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7ff84db3ba10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.810 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:19 compute-0 ceilometer_agent_compute[200222]: 2025-12-01 10:00:19.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 10:00:20 compute-0 podman[260624]: 2025-12-01 10:00:20.68750658 +0000 UTC m=+0.065209791 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 10:00:20 compute-0 podman[260625]: 2025-12-01 10:00:20.695753291 +0000 UTC m=+0.071806282 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 10:00:22 compute-0 nova_compute[189491]: 2025-12-01 10:00:22.265 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:23 compute-0 nova_compute[189491]: 2025-12-01 10:00:23.610 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:26.557 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:00:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:26.558 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:00:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:00:26.558 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:00:27 compute-0 nova_compute[189491]: 2025-12-01 10:00:27.269 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:28 compute-0 nova_compute[189491]: 2025-12-01 10:00:28.612 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:29 compute-0 podman[203700]: time="2025-12-01T10:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 10:00:29 compute-0 podman[203700]: @ - - [01/Dec/2025:10:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 10:00:29 compute-0 podman[203700]: @ - - [01/Dec/2025:10:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Dec  1 10:00:30 compute-0 podman[260663]: 2025-12-01 10:00:30.687400582 +0000 UTC m=+0.061455490 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 10:00:30 compute-0 podman[260664]: 2025-12-01 10:00:30.708670341 +0000 UTC m=+0.078664750 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 10:00:30 compute-0 podman[260665]: 2025-12-01 10:00:30.710223238 +0000 UTC m=+0.076519646 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Dec  1 10:00:31 compute-0 openstack_network_exporter[205866]: ERROR   10:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:00:31 compute-0 openstack_network_exporter[205866]: ERROR   10:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:00:31 compute-0 openstack_network_exporter[205866]: ERROR   10:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 10:00:31 compute-0 openstack_network_exporter[205866]: ERROR   10:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 10:00:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 10:00:31 compute-0 openstack_network_exporter[205866]: ERROR   10:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 10:00:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 10:00:32 compute-0 nova_compute[189491]: 2025-12-01 10:00:32.272 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:33 compute-0 nova_compute[189491]: 2025-12-01 10:00:33.614 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:33 compute-0 ovn_controller[97794]: 2025-12-01T10:00:33Z|00186|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  1 10:00:37 compute-0 nova_compute[189491]: 2025-12-01 10:00:37.276 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:37 compute-0 podman[260725]: 2025-12-01 10:00:37.716720147 +0000 UTC m=+0.080018454 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, version=9.6, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Dec  1 10:00:37 compute-0 podman[260726]: 2025-12-01 10:00:37.729911768 +0000 UTC m=+0.097987891 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 10:00:38 compute-0 nova_compute[189491]: 2025-12-01 10:00:38.342 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:38 compute-0 nova_compute[189491]: 2025-12-01 10:00:38.616 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:41 compute-0 podman[260764]: 2025-12-01 10:00:41.701277794 +0000 UTC m=+0.080431812 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 10:00:41 compute-0 podman[260765]: 2025-12-01 10:00:41.787803304 +0000 UTC m=+0.157243555 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  1 10:00:42 compute-0 nova_compute[189491]: 2025-12-01 10:00:42.278 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:43 compute-0 nova_compute[189491]: 2025-12-01 10:00:43.617 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:47 compute-0 nova_compute[189491]: 2025-12-01 10:00:47.282 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:48 compute-0 nova_compute[189491]: 2025-12-01 10:00:48.619 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:51 compute-0 podman[260809]: 2025-12-01 10:00:51.68769142 +0000 UTC m=+0.063895690 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 10:00:51 compute-0 podman[260810]: 2025-12-01 10:00:51.714511143 +0000 UTC m=+0.091418230 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 10:00:52 compute-0 nova_compute[189491]: 2025-12-01 10:00:52.285 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:52 compute-0 nova_compute[189491]: 2025-12-01 10:00:52.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:00:52 compute-0 nova_compute[189491]: 2025-12-01 10:00:52.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 10:00:52 compute-0 nova_compute[189491]: 2025-12-01 10:00:52.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 10:00:52 compute-0 nova_compute[189491]: 2025-12-01 10:00:52.766 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 10:00:53 compute-0 nova_compute[189491]: 2025-12-01 10:00:53.623 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:57 compute-0 nova_compute[189491]: 2025-12-01 10:00:57.291 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:58 compute-0 nova_compute[189491]: 2025-12-01 10:00:58.628 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:00:59 compute-0 podman[203700]: time="2025-12-01T10:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 10:00:59 compute-0 podman[203700]: @ - - [01/Dec/2025:10:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 10:00:59 compute-0 podman[203700]: @ - - [01/Dec/2025:10:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Dec  1 10:01:01 compute-0 openstack_network_exporter[205866]: ERROR   10:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:01:01 compute-0 openstack_network_exporter[205866]: ERROR   10:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:01:01 compute-0 openstack_network_exporter[205866]: ERROR   10:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 10:01:01 compute-0 openstack_network_exporter[205866]: ERROR   10:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 10:01:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 10:01:01 compute-0 openstack_network_exporter[205866]: ERROR   10:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 10:01:01 compute-0 openstack_network_exporter[205866]: 
Dec  1 10:01:01 compute-0 podman[260851]: 2025-12-01 10:01:01.70639869 +0000 UTC m=+0.082002521 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 10:01:01 compute-0 podman[260850]: 2025-12-01 10:01:01.721381405 +0000 UTC m=+0.101197718 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 10:01:01 compute-0 podman[260852]: 2025-12-01 10:01:01.740712907 +0000 UTC m=+0.110311251 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 10:01:02 compute-0 nova_compute[189491]: 2025-12-01 10:01:02.297 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:02 compute-0 nova_compute[189491]: 2025-12-01 10:01:02.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:03 compute-0 nova_compute[189491]: 2025-12-01 10:01:03.630 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:04 compute-0 nova_compute[189491]: 2025-12-01 10:01:04.709 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:04 compute-0 nova_compute[189491]: 2025-12-01 10:01:04.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:04 compute-0 nova_compute[189491]: 2025-12-01 10:01:04.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:04 compute-0 nova_compute[189491]: 2025-12-01 10:01:04.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:04 compute-0 nova_compute[189491]: 2025-12-01 10:01:04.754 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:01:04 compute-0 nova_compute[189491]: 2025-12-01 10:01:04.755 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:01:04 compute-0 nova_compute[189491]: 2025-12-01 10:01:04.755 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:01:04 compute-0 nova_compute[189491]: 2025-12-01 10:01:04.756 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.135 189495 WARNING nova.virt.libvirt.driver [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.137 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5322MB free_disk=72.30591201782227GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.137 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.138 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.199 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.199 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.226 189495 DEBUG nova.compute.provider_tree [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed in ProviderTree for provider: 143c7fe7-af1f-477a-978c-6a994d785d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.240 189495 DEBUG nova.scheduler.client.report [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Inventory has not changed for provider 143c7fe7-af1f-477a-978c-6a994d785d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.261 189495 DEBUG nova.compute.resource_tracker [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 10:01:05 compute-0 nova_compute[189491]: 2025-12-01 10:01:05.261 189495 DEBUG oslo_concurrency.lockutils [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:01:06 compute-0 nova_compute[189491]: 2025-12-01 10:01:06.262 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:06 compute-0 nova_compute[189491]: 2025-12-01 10:01:06.263 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 10:01:07 compute-0 nova_compute[189491]: 2025-12-01 10:01:07.302 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:08 compute-0 nova_compute[189491]: 2025-12-01 10:01:08.631 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:08 compute-0 podman[260919]: 2025-12-01 10:01:08.699218203 +0000 UTC m=+0.072478529 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 10:01:08 compute-0 podman[260920]: 2025-12-01 10:01:08.704276555 +0000 UTC m=+0.069995608 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 10:01:10 compute-0 nova_compute[189491]: 2025-12-01 10:01:10.715 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:12 compute-0 nova_compute[189491]: 2025-12-01 10:01:12.305 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:12 compute-0 podman[260959]: 2025-12-01 10:01:12.716533654 +0000 UTC m=+0.086880464 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 10:01:12 compute-0 podman[260960]: 2025-12-01 10:01:12.745259623 +0000 UTC m=+0.114159357 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  1 10:01:13 compute-0 nova_compute[189491]: 2025-12-01 10:01:13.634 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:13 compute-0 nova_compute[189491]: 2025-12-01 10:01:13.713 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:14 compute-0 ovn_controller[97794]: 2025-12-01T10:01:14Z|00187|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  1 10:01:17 compute-0 nova_compute[189491]: 2025-12-01 10:01:17.309 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:18 compute-0 nova_compute[189491]: 2025-12-01 10:01:18.636 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:22 compute-0 nova_compute[189491]: 2025-12-01 10:01:22.313 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:22 compute-0 podman[261003]: 2025-12-01 10:01:22.708824359 +0000 UTC m=+0.074932909 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 10:01:22 compute-0 podman[261004]: 2025-12-01 10:01:22.737476176 +0000 UTC m=+0.098270155 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  1 10:01:23 compute-0 nova_compute[189491]: 2025-12-01 10:01:23.638 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:01:26.559 106659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 10:01:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:01:26.559 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 10:01:26 compute-0 ovn_metadata_agent[106654]: 2025-12-01 10:01:26.559 106659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 10:01:27 compute-0 nova_compute[189491]: 2025-12-01 10:01:27.319 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:28 compute-0 nova_compute[189491]: 2025-12-01 10:01:28.639 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:29 compute-0 podman[203700]: time="2025-12-01T10:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 10:01:29 compute-0 podman[203700]: @ - - [01/Dec/2025:10:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 10:01:29 compute-0 podman[203700]: @ - - [01/Dec/2025:10:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4346 "" "Go-http-client/1.1"
Dec  1 10:01:31 compute-0 openstack_network_exporter[205866]: ERROR   10:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:01:31 compute-0 openstack_network_exporter[205866]: ERROR   10:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:01:31 compute-0 openstack_network_exporter[205866]: ERROR   10:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 10:01:31 compute-0 openstack_network_exporter[205866]: ERROR   10:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 10:01:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 10:01:31 compute-0 openstack_network_exporter[205866]: ERROR   10:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 10:01:31 compute-0 openstack_network_exporter[205866]: 
Dec  1 10:01:32 compute-0 nova_compute[189491]: 2025-12-01 10:01:32.322 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:32 compute-0 podman[261049]: 2025-12-01 10:01:32.70172547 +0000 UTC m=+0.072124421 container health_status dac4da2348936a074227ed05f3ae3798aeec55c4a5ebfc823611654d90618e30 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 10:01:32 compute-0 podman[261050]: 2025-12-01 10:01:32.708531638 +0000 UTC m=+0.078467578 container health_status e4882e1d1b7c67c2c4e8bcfae07626a7d34519fb8ef96b633dd58be71617b26f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 10:01:32 compute-0 podman[261051]: 2025-12-01 10:01:32.735245867 +0000 UTC m=+0.100892970 container health_status f2d8639519b361376780abb83cb5a5e341c4f412720fb5852b2a4d261a69c359 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 10:01:33 compute-0 nova_compute[189491]: 2025-12-01 10:01:33.642 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:37 compute-0 nova_compute[189491]: 2025-12-01 10:01:37.326 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:38 compute-0 nova_compute[189491]: 2025-12-01 10:01:38.645 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:39 compute-0 podman[261110]: 2025-12-01 10:01:39.704486524 +0000 UTC m=+0.071197938 container health_status f432f6e197c6f781e77797c4393531ed6d59401b9d0c167e1b78ebde938755ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 10:01:39 compute-0 podman[261109]: 2025-12-01 10:01:39.719777052 +0000 UTC m=+0.095212161 container health_status 110d829c69456678d06349e271c596b0261996ed60206ae8d123ef89c6e2daa0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container)
Dec  1 10:01:42 compute-0 nova_compute[189491]: 2025-12-01 10:01:42.329 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:43 compute-0 nova_compute[189491]: 2025-12-01 10:01:43.650 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:43 compute-0 podman[261146]: 2025-12-01 10:01:43.733506786 +0000 UTC m=+0.109126834 container health_status 5904bb54d64a769a21814fa0bfc4ef631c4a0724db7663ac488fd79c27ee72ed (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 10:01:43 compute-0 podman[261147]: 2025-12-01 10:01:43.749300845 +0000 UTC m=+0.118712770 container health_status 8b15026a3e35dd9c0761cc26658e64cb048a8074e08883f1d204ee896c7d4db4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller)
Dec  1 10:01:47 compute-0 nova_compute[189491]: 2025-12-01 10:01:47.334 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:47 compute-0 systemd-logind[792]: New session 31 of user zuul.
Dec  1 10:01:47 compute-0 systemd[1]: Started Session 31 of User zuul.
Dec  1 10:01:48 compute-0 nova_compute[189491]: 2025-12-01 10:01:48.651 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:52 compute-0 nova_compute[189491]: 2025-12-01 10:01:52.337 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:53 compute-0 ovs-vsctl[261360]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  1 10:01:53 compute-0 nova_compute[189491]: 2025-12-01 10:01:53.654 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 10:01:53 compute-0 podman[261377]: 2025-12-01 10:01:53.710738501 +0000 UTC m=+0.081681596 container health_status ac40fb0e07b42b30d585922d4049f5bceae41937d9eab549be66f1287b1d684c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 10:01:53 compute-0 nova_compute[189491]: 2025-12-01 10:01:53.714 189495 DEBUG oslo_service.periodic_task [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 10:01:53 compute-0 nova_compute[189491]: 2025-12-01 10:01:53.714 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 10:01:53 compute-0 nova_compute[189491]: 2025-12-01 10:01:53.715 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 10:01:53 compute-0 podman[261373]: 2025-12-01 10:01:53.729520954 +0000 UTC m=+0.100734266 container health_status 6c880f82acc53406c0f7df0e8ebf2efb1253956c94bf2b11bf2d8c331ff3ff62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 10:01:54 compute-0 nova_compute[189491]: 2025-12-01 10:01:54.033 189495 DEBUG nova.compute.manager [None req-99a204db-28be-492e-8ed5-880b0bce3867 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 10:01:54 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 261218 (sos)
Dec  1 10:01:54 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  1 10:01:54 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  1 10:01:54 compute-0 virtqemud[189211]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  1 10:01:54 compute-0 virtqemud[189211]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  1 10:01:54 compute-0 virtqemud[189211]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  1 10:01:57 compute-0 nova_compute[189491]: 2025-12-01 10:01:57.341 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 10:01:58 compute-0 nova_compute[189491]: 2025-12-01 10:01:58.656 189495 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 10:01:59 compute-0 systemd[1]: Starting Hostname Service...
Dec  1 10:01:59 compute-0 systemd[1]: Started Hostname Service.
Dec  1 10:01:59 compute-0 podman[203700]: time="2025-12-01T10:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 10:01:59 compute-0 podman[203700]: @ - - [01/Dec/2025:10:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 10:01:59 compute-0 podman[203700]: @ - - [01/Dec/2025:10:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4352 "" "Go-http-client/1.1"
Dec  1 10:02:01 compute-0 openstack_network_exporter[205866]: ERROR   10:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:02:01 compute-0 openstack_network_exporter[205866]: ERROR   10:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 10:02:01 compute-0 openstack_network_exporter[205866]: ERROR   10:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 10:02:01 compute-0 openstack_network_exporter[205866]: ERROR   10:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 10:02:01 compute-0 openstack_network_exporter[205866]: ERROR   10:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
